text
stringlengths 20
1.01M
| url
stringlengths 14
1.25k
| dump
stringlengths 9
15
⌀ | lang
stringclasses 4
values | source
stringclasses 4
values |
|---|---|---|---|---|
2013-11-09 23:11:49 8 Comments
I have read articles about the differences between SOAP and REST as a web service communication protocol, but I think that the biggest advantages for REST over SOAP are:
REST is more dynamic, no need to create and update UDDI(Universal Description, Discovery, and Integration).
REST is not restricted to only XML format. RESTful web services can send plain text/JSON/XML.
But SOAP is more standardized (E.g.: security).
So, am I correct in these points?
Related Questions
Sponsored Content
17 Answered Questions
[SOLVED] What is the maximum length of a URL in different browsers?
23 Answered Questions
[SOLVED] How do I POST JSON data with Curl from a terminal/commandline to Test Spring REST?
- 2011-08-24 08:51:11
- kamaci
- 2638237 View
- 2609 Score
- 23 Answer
- Tags: json rest spring-mvc curl http-headers
34 Answered Questions
[SOLVED] PUT vs. POST in REST
9 Answered Questions
[SOLVED] How to pass "Null" (a real surname!) to a SOAP web service in ActionScript 3
- 2010-12-16 00:42:14
- bill
- 915387 View
- 4605 Score
- 9 Answer
- Tags: apache-flex actionscript soap coldfusion wsdl
31 Answered Questions
32 Answered Questions
[SOLVED] What exactly is RESTful programming?
- 2009-03-22 14:45:39
- hasen
- 1636210 View
- 3921 Score
- 32 Answer
- Tags: http rest definition
@Jose Manuel Gomez Alvarez 2018-05-23 15:41:13
Among many others already covered in the many answers, I would highlight that SOAP enables to define a contract, the WSDL, which define the operations supported, complex types, etc. SOAP is oriented to operations, but REST is oriented at resources. Personally I would select SOAP for complex interfaces between internal enterprise applications, and REST for public, simpler, stateless interfaces with the outside world.
@Premraj 2015-12-08 23:38:04
REST(REpresentational State Transfer)
REpresentational State of an Object is Transferred is REST i.e. we don't send Object, we send state of Object. REST is an architectural style. It doesn’t define so many standards like SOAP. REST is for exposing Public APIs(i.e. Facebook API, Google Maps API) over the internet to handle CRUD operations on data. REST is focused on accessing named resources through a single consistent interface.
SOAP(Simple Object Access Protocol)
SOAP brings its own protocol and focuses on exposing pieces of application logic (not data) as services. SOAP exposes operations. SOAP is focused on accessing named operations, each operation implement some business logic. Though SOAP is commonly referred to as web services this is misnomer. SOAP has a very little if anything to do with the Web. REST provides true Web services based on URIs and HTTP.
Why REST?
application/xmlor
application/jsonfor POST and
/user/1234.jsonor GET
/user/1234.xmlfor GET.
Why SOAP?
source1
source2
@Santiago Martí Olbrich 2016-02-27 20:30:57
REST verbs/methods don't have a 1 to 1 relation to CRUD methods although, it can help in the beginning to understand the REST style.
@Mou 2016-11-07 14:33:43
REST does not support SSL ? the uniform resource url for rest can not be start with https:// ?
@blue_note 2018-08-15 18:19:17
There are already technical answers, so I'll try to provide some intuition.
Let's say you want to call a function in a remote computer, implemented in some other programming language (this is often called remote procedure call/RPC). Assume that function can be found at a specific URL, provided by the person who wrote it. You have to (somehow) send it a message, and get some response. So, there are two main questions to consider.
For the first question, the official definition is WSDL. This is an XML file which describes, in detailed and strict format, what are the parameters, what are their types, names, default values, the name of the function to be called, etc. An example WSDL here shows that the file is human-readable (but not easily).
For the second question, there are various answers. However, the only one used in practice is SOAP. Its main idea is: wrap the previous XML (the actual message) into yet another XML (containing encoding info and other helpful stuff), and send it over HTTP. The POST method of the HTTP is used to send the message, since there is always a body.
The main idea of this whole approach is that you map a URL to a function, that is, to an action. So, if you have a list of customers in some server, and you want to view/update/delete one, you must have 3 URLS:
myapp/read-customerand in the body of the message, pass the id of the customer to be read.
myapp/update-customerand in the body, pass the id of the customer, as well as the new data
myapp/delete-customerand the id in the body
The REST approach sees things differently. A URL should not represent an action, but a thing (called resource in the REST lingo). Since the HTTP protocol (which we are already using) supports verbs, use those verbs to specify what actions to perform on the thing.
So, with the REST approach, customer number 12 would be found on URL
myapp/customers/12. To view the customer data, you hit the URL with a GET request. To delete it, the same URL, with a DELETE verb. To update it, again, the same URL with a POST verb, and the new content in the request body.
For more details about the requirements that a service has to fulfil to be considered truly RESTful, see the Richardson maturity model. The article gives examples, and, more importantly, explains why a (so-called) SOAP service, is a level-0 REST service (although, level-0 means low compliance to this model, it's not offensive, and it is still useful in many cases).
@Ashish Kamble 2019-09-17 10:41:56
What do you mean
RESTis not web service?? Whats
JAX-RSthen??
@blue_note 2019-09-17 10:46:25
@AshishKamble: I provided the link of the rest services specification. The official definition contains only the WS-* protocols (roughly the ones we call "SOAP") and rest is not part of it officially
@blue_note 2019-09-17 10:47:01
@AshishKamble: Also, note that there's also a JAX-WS, which means "web services", differentiated from "rest services". Anyway, the distinction is not important for any practical purposes, as I also noted.
@Bacteria 2015-06-14 19:48:27
SOAP (Simple Object Access Protocol) and REST (Representation State Transfer) both are beautiful in their way. So I am not comparing them. Instead, I am trying to depict the picture, when I preferred to use REST and when SOAP.
What is payload?
Now, for example, I have to send a Telegram and we all know that the cost of the telegram will depend on some words.
So tell me among below mentioned these two messages, which one is cheaper to send?
or
I know your answer will be the second one although both representing the same message second one is cheaper regarding cost.
So I am trying to say that, sending data over the network in JSON format is cheaper than sending it in XML format regarding payload.
Here is the first benefit or advantages of REST over SOAP. SOAP only support XML, but REST supports different format like text, JSON, XML, etc. And we already know, if we use Json then definitely we will be in better place regarding payload.
Now, SOAP supports the only XML, but it also has its advantages.
Really! How?
SOAP relies on XML in three ways Envelope – that defines what is in the message and how to process it.
A set of encoding rules for data types, and finally the layout of the procedure calls and responses gathered.
This envelope is sent via a transport (HTTP/HTTPS), and an RPC (Remote Procedure Call) is executed, and the envelope is returned with information in an XML formatted document.
The important point is that one of the advantages of SOAP is the use of the “generic” transport but REST uses HTTP/HTTPS. SOAP can use almost any transport to send the request but REST cannot. So here we got an advantage of using SOAP.
As I already mentioned in above paragraph “REST uses HTTP/HTTPS”, so go a bit deeper on these words.
When we are talking about REST over HTTP, all security measures applied HTTP are inherited, and this is known as transport level security and it secures messages only while it is inside the wire but once you delivered it on the other side you don’t know how many stages it will have to go through before reaching the real point where the data will be processed. And of course, all those stages could use something different than HTTP.So Rest is not safer completely, right?.
Apart from that, as REST is limited by it's HTTP protocol so it’s transaction support is neither ACID compliant nor can provide two-phase commit across distributed transnational resources.
But SOAP has comprehensive support for both ACID based transaction management for short-lived transactions and compensation based transaction management for long-running transactions. It also supports two-phase commit across distributed resources.
I am not drawing any conclusion, but I will prefer SOAP-based web service while security, transaction, etc. are the main concerns.
Here is the "The Java EE 6 Tutorial" where they have said A RESTful design may be appropriate when the following conditions are met. Have a look.
Hope you enjoyed reading my answer.
@Bhargav Nanekalva 2015-08-23 03:35:35
Great answer but remember REST can use any transport protocol. For example, it can use FTP.
@Osama Aftab 2015-09-07 12:45:34
Who said REST can't use SSL?
@Bacteria 2015-09-07 16:01:23
@ Osama Aftab REST supports SSL, but SOAP supports SSL just like REST additionally it also supports WS-Security.
@GaTechThomas 2016-11-08 18:59:01
To reference the point about size of XML data, when compression is enabled, XML is quite small.
@ThomasRS 2017-04-20 21:51:41
The point about the size of the payload should be deleted, it is such a one-dimensional comparison between JSON and XML and is only possible to detect in seriously optimized setups, which are far between.
@Phil Sturgeon 2018-01-05 00:17:44
A lot of these answers entirely forgot to mention hypermedia controls (HATEOAS) which is completely fundamental to REST. A few others touched on it, but didn't really explain it so well.
This article should explain the difference between the concepts, without getting into the weeds on specific SOAP features.
@cmd 2013-11-09 23:19:50
RESTvs
SOAPis not the right question to ask.
REST, unlike
SOAPis not a protocol.
RESTis an architectural style and a design for network-based software architectures.
RESTconcepts,
DELETE.
@Abdulaziz's question does illuminate the fact that
RESTand
HTTPare often used in tandem. This is primarily due to the simplicity of HTTP and its very natural mapping to RESTful principles.
Fundamental REST Principles
Client-Server Communication
Client-server architectures have a very distinct separation of concerns. All applications built in the RESTful style must also be client-server in principle.
Stateless
Each client request to the server requires that its state be fully represented. The server must be able to completely understand the client request without using any server context or server session state. It follows that all state must be kept on the client.
Cacheable.
See this blog post on REST Design Principles for more details on REST and the above stated bullets.
EDIT: update content based on comments
@Pedro Werneck 2013-11-10 00:51:41
REST does not have a predefined set of operations that are CRUD operations. Mapping HTTP methods to CRUD operations blindly is one of the most common misconceptions around REST. The HTTP methods have very well defined behaviors that have nothing to do with CRUD, and REST isn't coupled to HTTP. You can have a REST API over ftp with nothing but RETR and STOR, for instance.
@Pedro Werneck 2013-11-10 00:53:23
Also, what do you mean by 'REST services are idempotent'? As far as I know, you have some HTTP methods that by default are idempotent, and if a particular operation in your service needs idempotence, you should use them, but it doesn't make sense to say the service is idempotent. The service may have resources with actions that may be effected in an idempotent or non-idempotent fashion.
@Bruce_Wayne 2015-04-16 18:25:43
@cmd :please remove fourth point - "A RESTful architecture may use HTTP or SOAP as the underlying communication protocol". its a misinformation you are conveying.
@Pedro Werneck 2013-11-10 00:45:24
Unfortunately, there are a lot of misinformation and misconceptions around REST. Not only your question and the answer by @cmd reflect those, but most of the questions and answers related to the subject on Stack Overflow..
Pushing things a little and trying to establish a comparison, the main difference between SOAP and REST is the degree of coupling between client and server implementations.. A client is supposed to enter a REST service with zero knowledge of the API, except for the entry point and the media type. In SOAP, the client needs previous knowledge on everything it will be using, or it won't even begin the interaction. Additionally, a REST client can be extended by code-on-demand supplied by the server itself, the classical example being JavaScript code used to drive the interaction with another service on the client-side.
I think these are the crucial points to understand what REST is about, and how it differs from SOAP:
REST is protocol independent. It's not coupled to HTTP. Pretty much like you can follow an ftp link on a website, a REST application can use any protocol for which there is a standardized URI scheme.
REST is not a mapping of CRUD to HTTP methods. Read this answer for a detailed explanation on that.
REST is as standardized as the parts you're using. Security and authentication in HTTP are standardized, so that's what you use when doing REST over HTTP.
REST is not REST without hypermedia and HATEOAS. This means that a client only knows the entry point URI and the resources are supposed to return links the client should follow. Those fancy documentation generators that give URI patterns for everything you can do in a REST API miss the point completely. They are not only documenting something that's supposed to be following the standard, but when you do that, you're coupling the client to one particular moment in the evolution of the API, and any changes on the API have to be documented and applied, or it will break.
REST is the architectural style of the web itself. When you enter Stack Overflow, you know what a User, a Question and an Answer are, you know the media types, and the website provides you with the links to them. A REST API has to do the same. If we designed the web the way people think REST should be done, instead of having a home page with links to Questions and Answers, we'd have a static documentation explaining that in order to view a question, you have to take the URI
stackoverflow.com/questions/<id>, replace id with the Question.id and paste that on your browser. That's nonsense, but that's what many people think REST is.
This last point can't be emphasized enough. If your clients are building URIs from templates in documentation and not getting links in the resource representations, that's not REST. Roy Fielding, the author of REST, made it clear on this blog post: REST APIs must be hypertext-driven.
With the above in mind, you'll realize that while REST might not be restricted to XML, to do it correctly with any other format you'll have to design and standardize some format for your links. Hyperlinks are standard in XML, but not in JSON. There are draft standards for JSON, like HAL.
Finally, REST isn't for everyone, and a proof of that is how most people solve their problems very well with the HTTP APIs they mistakenly called REST and never venture beyond that. REST is hard to do sometimes, especially in the beginning, but it pays over time with easier evolution on the server side, and client's resilience to changes. If you need something done quickly and easily, don't bother about getting REST right. It's probably not what you're looking for. If you need something that will have to stay online for years or even decades, then REST is for you.
@Falco 2014-06-02 11:23:52
Really nice answer :D But I have one question regarding your comparision to the SO-Homepage. How would you implement a Search-Feature in REST? On a homepage you have a search field and the search-word is usually templated into the GET-Part of the URL, or submitted via POST - which is actually templating a user generated string into an URL ?
@Pedro Werneck 2014-06-02 14:40:35
Either one is fine. The issue is how the users get the URLs, not how they use them. They should get the search url from a link in some other document, not from documentation. The documentation may explain how to use the search resource.
@Falco 2014-06-02 15:02:43
So a link with a placeholder in place of the searchterm is fine? Because the searchterm is an input from the user?
@Bhavesh Agarwal 2014-12-04 04:05:16
"people tend to call REST any HTTP API that isn't SOAP". Can you please elaborate this point by giving an example of an API over HTTP, which is not SOAP and not REST either?
@Pedro Werneck 2014-12-04 15:27:48
@BhaveshAgarwal almost every so-called "REST" API you can find around the internet is an example. The StackExchange API itself is an example.
@Pedro Werneck 2015-01-21 16:41:13
@CristiPotlog I never said SOAP is dependent on any particular protocol, I merely emphasize how REST isn't. The second link you sent says REST requires HTTP, which is wrong.
@Orestis 2016-08-11 16:14:30
Lets repeat that once more: HATEOAS is a constraint if you wanna call your API Restful!
@Shadrack B. Orina 2016-08-28 06:14:48
Say I have a soap client and a REST server, can the SOAP client post to the REST server.
@Oleg Sapishchuk 2017-02-27 20:39:46
@PedroWerneck I've read your linked response for "REST is not mapping CRUD to HTTP methods." But I didn't found there any explanations why REST is not mapping CRUD to HTTP methods, only that person who created question, not properly used HTTP method, for his activity. Can you share more information on this topic, please.
@Pedro Werneck 2017-02-27 21:37:48
@OlegSapishchuk HTTP methods have specific semantics, very distinct from CRUD operations.
@Sachin Kainth 2017-03-14 12:20:50
Pedro, I had the exact same query as @OlegSapishchuk. As far as I understand CRUD methods do map quite nicely to HTTP methods. Can you elaborate on this please?
@Pedro Werneck 2017-03-14 20:43:43
@SachinKainth There's an answer for that here. You can map CRUD ops to HTTP methods, but that's not REST, because it's not the intended semantics of those methods as documented in the RFCs.
@Oleg Sapishchuk 2017-03-15 13:18:21
@PedroWerneck the biggest joke here is the fact Google, Twitter and other top companies calling some of their services REST API, while they are not REST, as HATEOS principle was not followed up O_o.
@Hoàng Đăng 2017-05-31 04:25:08
@PedroWerneck as you said "stackoverflow.com/questions/<id>" replace id with the Question.idm that is not REST because it return the whole site, so RESTful only return data in format (json, xml..)
@Pedro Werneck 2017-05-31 23:06:48
@HoàngĐăng Not at all. There's no REST constraint for that. You should return whatever format the client asked in the
Acceptheader, or
406 Not Acceptable.
@Rajan Chauhan 2017-10-28 17:55:57
Last 4 lines are gem and should be fully understood by the person in development. Doing pure rest is time consuming but gives rewards in longer run. So better for medium sized or big sized projects. Not good for prototyping and small projects.
@aod 2017-11-14 07:23:35
What if there is lots of links the response should return so that HATEOAS is satisfied? Is it acceptable having a big response for a small demand? For example, in an online shop, I would like to fetch a good's thumbnail information. However, with this request, details, add to cart, display comments, add comment, similar goods, etc links come also. While I am aware of all these links after first interaction, is it necessary for the server to send them for each request?
@Pedro Werneck 2017-11-14 17:12:13
@aod The goal of REST isn't efficient communication, on the contrary. It trades efficiency for long-term compatibility and evolvability. That's why caching is an important part of REST. If you're serious about HATEOAS, you need to invest some time in setting up cache control headers and orienting clients to use it.
@Rex 2017-03-21 12:47:36
Difference between Rest and Soap
SOAP
REST
For more Details please see here
@Drazen Bjelovuk 2019-02-04 20:03:38
Do 3 and 6 under REST not contradict?
@Rex 2019-02-13 04:04:30
We just compare the feature of each other.
@Quan Nguyen 2016-09-20 08:02:14
Addition for:
++ A mistake that’s often made when approaching REST is to think of it as “web services with URLs”—to think of REST as another remote procedure call (RPC) mechanism, like SOAP, but invoked through plain HTTP URLs and without SOAP’s hefty XML namespaces.
++ On the contrary, REST has little to do with RPC. Whereas RPC is service oriented and focused on actions and verbs, REST is resource oriented, emphasizing the things and nouns that comprise an application.
@marvelTracker 2016-01-17 00:17:05
IMHO you can't compare SOAP and REST where those are two different things.
SOAP is a protocol and REST is a software architectural pattern. There is a lot of misconception in the internet for SOAP vs REST.
SOAP defines XML based message format that web service-enabled applications use to communicate each other over the internet. In order to do that the applications need prior knowledge of the message contract, datatypes, etc..
REST represents the state(as resources) of a server from an URL.It is stateless and clients should not have prior knowledge to interact with server beyond the understanding of hypermedia.
|
https://tutel.me/c/programming/questions/19884295/soap+vs+rest+differences
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
The number of sub-meshes inside the Mesh object.
Each sub-mesh corresponds to a Material in a Renderer, such as MeshRenderer or SkinnedMeshRenderer. A sub-mesh consists of a list of triangles, which refer to a set of vertices. Vertices can be shared between multiple sub-meshes. See Also: GetTriangles, SetTriangles.
using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour { void Start() { Mesh mesh = GetComponent<MeshFilter>().mesh; Debug.Log("Submeshes: " + mesh.subMeshCount); } }
|
https://docs.unity3d.com/es/2018.2/ScriptReference/Mesh-subMeshCount.html
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Hide Forgot
Description of problem:
I use a lot of warning flags and I've been seeing gcc crash building firefox and
freetype (but not many other projects I build). Anyway, I have tracked it down
to -Wmissing-include-dirs and a minimal test case. Should be something very silly.
Version-Release number of selected component (if applicable):
cpp-4.1.1-51.fc6
Steps to Reproduce:
1. Create a file named x.c containing one line:
#include "x.h"
2. Run:
cpp -Wmissing-include-dirs -Ixx x.c
Actual results:
cc1: internal compiler error: Segmentation fault
Please submit a full bug report,
with preprocessed source if appropriate.
See <URL:> for instructions.
Expected results:
# 1 "x.c"
# 1 "<built-in>"
# 1 "<command line>"
# 1 "x.c"
x.c:1:22: error: x.h: No such file or directory
Sorry, can't reproduce this, with either cpp-4.1.1-51.fc6 on ppc32
or cpp-4.1.1-52.el5.1 on x86_64.
All I get is:
cpp -Wmissing-include-dirs -Ixx x.c
cc1: warning: xx: No such file or directory
# 1 "x.c"
# 1 "<built-in>"
# 1 "<command line>"
# 1 "x.c"
x.c:1:15: error: x.h: No such file or directory
|
https://partner-bugzilla.redhat.com/show_bug.cgi?id=235467
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Will mostly be plotting time Vs value(time) but in certain
> cases will need plots of other data, and therefore have to
> look at the worst case scenario. Not exactly sure what you
> mean by "continuous" since all are descrete data
> points. The data may not be smooth (could have misbehaving
> sensors giving garbage) and jump all over the place.
Bad terminology: for x I meant sorted (monotonic) and for y the ideal
cases is smooth and not varying too rapidly. Try the lod feature and
see if it works for you.
Perhaps it would be better to extend the LOD functionality, so that
you control the extent of subsampling. Eg, suppose you have 100,000 x
data points but only 1000 pixels of display. Then for every data 100
points you could set the decimation factor, perhaps as a percentage.
More generally, we could implement a LOD base class users could supply
their own derived instances to subsample the data how they see fit,
eg, min and max over the 100 points, and so on. By reshaping the
points into a 1000x100 matrix, this could be done in Numeric
efficiently.
>> econdly, the standard gdmodule will iterate over the x, y
>> values in a python loop in gd.py. This is slow for lines with
>> lots of points. I have a patched gdmodule that I can send you
>> (provide platform info) that moves this step to the extension
>> module. Potentially a very big win.
> Yes, that would be great! System info:
Here is the link
You must also upgrade gd to 2.0.22 (alas 2.0.21 is obsolete!) since I
needed the latest version to get this sucker ported to win32.
>> Another possibility: change backends. The GTK backend is
>> significantly faster than GD. If you want to work off line
>> (ie, draw to image only and not display to screen ) and are on
>> a linux box, you can do this with GTK and Xvfb. I'll give you
>> instructions if interested. In the next release of matplotlib,
>> there will be a libart paint backend (cross platform) that may
>> be faster than GD. I'm working on an Agg backend that should
>> be considerably faster than all the other backends since it
>> does everything in extension code -- we'll see
> Yes I am only planning to work offline. Want to be able to
> pipe the output images to stdout. I am looking for the
> fastest solution possible.
I don't know how to write a GTK pixbuf to stdout. I inquired on the
pygtk mailing list, so perhaps we'll learn something soon. To use GTK
in Xvfb, make sure you have Xvfb (X virtual frame buffer) installed
(/usr/X11R6/bin/Xvfb). There is probably an RPM, but I don't
remember.
You then need to start it with something like
XVFB_HOME=/usr/X11R6
$XVFB_HOME/bin/Xvfb :1 -co $XVFB_HOME/lib/X11/rgb -fp $XVFB_HOME/lib/X11/fonts/misc/,$XVFB_HOME/lib/X11/fonts/Speedo/,$XVFB_HOME/lib/X11/fonts/Type1/,$XVFB_HOME/lib/X11/fonts/75dpi/,$XVFB_HOME/lib/X11/fonts/100dpi/ &
And connect your display to it
setenv DISPLAY :1
Now you can use gtk as follows
from matplotlib.matlab import *
from matplotlib.backends.backend_gtk import show_xvfb
def f(t):
s1 = cos(2*pi*t)
e1 = exp(-t)
return multiply(s1,e1)
t1 = arange(0.0, 5.0, 0.1)
t2 = arange(0.0, 5.0, 0.02)
t3 = arange(0.0, 2.0, 0.01)
subplot(211)
plot(t1, f(t1), 'bo', t2, f(t2), 'k')
title('A tale of 2 subplots')
ylabel('Damped oscillation')
subplot(212)
plot(t3, cos(2*pi*t3), 'r--')
xlabel('time (s)')
ylabel('Undamped')
savefig('subplot_demo')
show_xvfb() # not show!
|
https://discourse.matplotlib.org/t/large-data-sets-and-performance/287
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Simulating Railroad Crossing Lights
Everyone has seen a railway crossing before, and if you're a railfan you've probably spent more than a few hours stuck behind them waiting for their infernal blink-blink-blink to stop so you can continue chasing your train!
How do you make your model crossing blink like that though? The simple answer would be a 555 timer in astable mode with some set and reset triggers. But that would be easy, and when you're an software engineer everything looks like a software problem. So instead, we attack the problem with a sledgehammer and use an Arduino.
Kidding aside, there are very valid reasons why you might want to use an Arduino for such a simple problem. Suppose you're using the excellent Arduino CMRI library to connect your layout to JMRI, and you have some infra red train detectors (coming soon from the Utrainia Electrik Company) wired up. It would then be very easy to get JMRI to set an output bit whenever a train is detected near the crossing; then in your Arduino you can flash the lights as appropriate.
So to turn this into a practical example, I decided to write a small library to achieve just this. Enter: CrossingFlasher! This small Arduino library exposes just three small methods to let you start, stop, and update your crossing lights. It handles all the timing for you, and flashes them at the correct 3Hz flashing rate that we use here in NZ.
And how would one use this on in Arduino C/MRI?
#include <CMRI.h> #include <CrossingFlasher.h> CMRI cmri; CrossingFlasher bucks(2, 3); // crossbucks on pins 2 and 3 void setup() { Serial.begin(9600); } void loop() { // 1: process incoming CMRI data cmri.process(); // 2: update output. Reads bit 0 of T packet and sets the LED to this if (cmri.get_bit(0) == true) bucks.on(); else bucks.off(); // 3: update cross bucks bucks.update(); }
Pretty simple huh? We've just wired up a set of crossbucks to pins 2 and 3 on our Arduino, and told them to flash whenever we set output bit 0 in JMRI (System Name: ML1). Using some Logix rules in JMRI it would be easy to trigger this from a block occupancy detector, a push button, or a broken-beam type detector. Piece of cake. A bit more work and you could even play the crossing sounds through JMRI.
If you need to run more than one pair of crossbucks, just connect multiple LEDs to each Arduino output pin.
|
http://www.utrainia.com/46-simulating-railroad-crossing-lights
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Introduction: Automatic Light Switch
You’ve tried everything. You’ve tried throwing socks, you’ve tried reaching extra far, you’ve even tried activating your latent psychic powers, yet the light switch refuses to budge. Of course, you could just get up off your bed and walk all the way across the room, but that would be too much effort for a feeble college student such as yourself. No, what we need right now is a convoluted contraption to turn the switch off from across the room, and with a little bit of coding, you can finally turn that switch on and off from your bed.
What we have come up with here is a mechanical switch that is easy to put together, easily removed, and does not require any modification of the wiring of the actual light switch, which makes it perfect for a college dorm room. Additionally, it can be made partially from parts found on eBay, and parts found in a typical dorm room. This means it does not require any additional tools such as a 3D printer or laser cutter, and therefore requires even less effort on your part! Although this may seem like it requires actual time and effort to put the whole contraption together, this will totally be totally worth it when with the push of a button or the flick of a switch, you can turn your lights off from across the room.
Materials
We were able to rent all of the electronic parts we needed for free from our university, but if you were to buy the exact parts we used in order to replicate this design, it would be quite expensive. There are much cheaper alternatives available for the cost-conscious college student. The prices listed are the average prices found after some online research.
Electronic Parts:
Arduino
We used the Arduino Uno, which can be bought online for around $25. While there are cheaper Arduino boards available, such as the Pro mini for $10, the Uno is recommended as the best board for beginners with electronics and coding. If you decide to try a different board, keep in mind that they will likely have a different programming interface, instead of the standard USB connection.
Servo
We used the Hitec HS-645MG Servo which costs about $30. There are a wide range of servos available at a much cheaper price that will do the same job as the one we used. A couple of good websites to browse servos are hobbyking and servocity. You can find a similarly sized servo from Hitec for $8 (HS-311). The third picture above depicts a servo motor from four angles so you can see approximately what the motor you get should look like.
Motor Shield (Not required, Cost: $20)
The Motor Shield made it easy for us to safely connect the servo to the Arduino without any extra wiring. The Arduino Uno has a 5V regulator, so you can connect the servo directly to the Arduino as long as 5V is within the servo's operating range. If you choose not to buy the motor shield, you will need some jumper wires which can be bought in large packs for a couple of dollars. You can find many guides on how to connect a servo to an arduino with a quick google search.
Long USB cable ($4) and USB Type A to Type B adapter ($2)
This is what will be used to program and control the Arduino from your computer. The USB cable should be long enough to reach from your bed to the light switch.
Other Materials:
- Any kind of tape
- Two-sided adhesive stickers
- A popsicle stick
- A computer
A rough estimate of the cost for this project would be $40. This includes the Arduino Uno because it is the easiest to work with, but does not include the motor shield. The cost for the other materials was not included because many students likely already have these materials around their dorm. While the cost for this project may seem high, keep in mind that you can re-purpose all of the electronic parts for use in future projects once you no longer need them for this light switching device. Cost was a factor in every part of this design, as we wanted to make a DIY that was accessible to many college students.
Step 1: Downloading the Arduino Program and Library to Your Computer
The first thing you must do is download the Arduino program to your computer from here. The program is completely free as well as the code, which is why we chose to use the Arduino. It is very easy to learn how to use as there are many online resources available.
Secondly, download the servo library from here. This library has to be put in a very specific place so that the Arduino program can locate it on your computer.
- On an Apple or Windows computer
- Go to Documents > Arduino > Libraries
- If there is no Libraries folder, Right click > New Folder > Name folder “Libraries”
- Libraries > Place the unzipped downloaded file in here, and rename it “Adafruit_MotorShield”
- Go to /home/
- Sketchbook > Libraries
- If there is no Libraries folder, right click > New Folder > Name Folder "Libraries"
- Libraries > Place the unzipped downloaded file in here, and rename it “Adafruit_MotorShield”
Step 2: Assembling and Programming Your Arduino
Assembling the brains of the project, the Arduino and motor, is the first and most crucial step. As shown above, attach the motor shield to the Arduino. This will arrange all of the circuitry for you, in terms of powering your motor. Next, attach your servo motor to the Servo 2 pins on the corner of your motor shield, as shown in the second picture.
Now, plug your Arduino into your computer using the cable provided. Next up is the coding aspect of this project, but don't fear, we've figured all the code out for you! (It's written in C++ if you are curious.) Before you can begin to code,you will be required to upload a program to your arduino that will cause the motor to turn a certain number of degrees every time you enter “1” into the console. Copy and paste this code into the Arduino window:
#include <AFMotor.h>
#include<Servo.h>
Servo light; int angle = 0; int onoff = 0; void setup() { Serial.begin (9600); Serial.println ("Enter 1 to toggle on/off"); light.attach (9); delay (1000); }
void loop() { if (angle <= 0) { angle = Serial.read(); } else { angle=0; if(onoff%2==0){ light.write (0); Serial.println ("off"); onoff++; } else { light.write (35); // this 35 value represents the angle the motor will turn, so you can change this Serial.println ("on"); onoff--; } } }
Step 3: Assembling Your Light Switcher
Now that you have finished setting up the Arduino, you may be relieved to find out that setting up this device is pretty simple! You have created the code to make the motor spin, all you need to do is make something that will hit your light switch. In order to do this, you simply need to extend the linear motor attachment you have been given, by taping half a popsicle stick to it. This ensures that the motor attachment will be able to push the switch in. Make sure you leave the hole in the middle of the motor attachment free of any tape so you can connect this piece to the servo. Connect the motor attachment to the servo motor as shown.
Step 4: Attaching Your Set-up to the Wall
For the push switch, you now need to attach both the servo motor and arduino board to the wall. We used these sticky strips to put each piece of equipment on the wall. There are several kinds of adhesive that you can use (two varieties are shown in the top picture). It doesn’t really matter where the arduino goes, as long as it’s close enough for the cord connecting it to the servo motor to reach. The servo, however, must be attached right next to the light switch so that the motor attachment lies directly overtop of the switch. In the bottom picture, we showed where we placed our sticky strips with respect to the light switch.
Step 5: Putting It All Together
Before you put the servo motor on the wall, make sure to run the code through it once, ensuring that your device has been set up properly and the popsicle stick is in the correct orientation. Finally, stick both your arduino and servo motor to the wall. One trick that helped us one was to put two little erasers in the space between the servo and the wall, to help counteract the force that the light switch exerts on the motor when it is being pressed down by the motor attachment. Any small object lying around your room will work here.
Next, run your USB extender cord all the way around your room. In order to avoid the cord causing any problems, you could use hooks and two-sided adhesive, to run the cord along your walls to the switch, or you could run the cord along the floor around the outside of your room. Keeping your computer close to your bed, you can now control the device using the code for the Arduino to rotate the motor, turning the lights on and off. Now, anytime you want to turn the lights off from your bed, simply send the code from your computer and voilà!
Step 6: Further Design Options
Depending on the setup of your dorm room and how comfortable you are with programming and technology, there are other options for how you can control the light switch from your bed.
One option is using a smartphone app called “Blynk” which controls the servo motor through your phone. Having and app on your phone means that you are not restricted to using the light switching device from your bed. As long as your phone is nearby and you are within range of the Arduino, you will be able to control the lights in your room. This option can be extremely convenient although it can take some time to set up, especially if you are unfamiliar with coding or with how Blynk works. Using Blynk requires an ethernet shield (similar in shape to the motor shield) to be plugged into the Arduino, allowing it to connect to the internet. This ethernet shield can cost somewhere around $20, so we decided against choosing this as our main option as it is less accessible. However, using Blynk eliminates the need for having wires running throughout your room, something that we think can be well worth the extra time it takes to set up the app on your phone. To guide you through this somewhat complicated task, there are many tutorials online, explaining how to use Blynk. A good one to follow is which will walk you through all the steps.
Another way to modify this device is to repurpose a remote to control the Arduino and turn the motor. One of the advantages of this method is that it is less expensive and easier to set up than the app. Although using this method would also require programming and wiring to ensure the remote is connected to the Arduino properly. This method makes it easy to control the light switch from anywhere in the room by pushing the buttons on the remote, however, it requires extra parts for your Arduino such as the Raspberry Pi Infrared Remote Control Ir Receiver Module DIY Kit, which can be found on Amazon for about $9. It also requires a bit more circuitry, but again, since the Arduino is so commonly used, there are plenty of other tutorials to be found online.
We hope this tutorial was helpful, and that you learned a little bit from it as well. Although setting up this device can take a little bit of time and money, just think of how nice it will be to turn off the lights from the comfort of your own bed. Happy light switching!
Recommendations
We have a be nice policy.
Please be positive and constructive.
6 Comments
Sorry to say this: But this instructable shows the complete lack of electronic knowledge. Putting a servo on top of a mechanical light switch? It looks ugly and can be done much easier: with pure electronics! If you use a bistable relay you can switch the light on and off with the light switch of the room AND you can switch the light on and off by Arduino. Without needing any servo. Have a look at the last chapter of this instructable:
what if you don't want to cut up your wall?
Cut up the wall? This is not necessary. You just have to add a cable to the switch. You can let it come out under the cover of the light switch.
can i use a sg90 servo motor instead?
Nice !
I was working exactly on the same design !
Using esp8266 for small node footprint
Nice design. With enough space between the wall and the servo, you can have a gap between the rotor and the switch that lets you still use it manually. Clever.
|
http://www.instructables.com/id/Automatic-Light-Switch-2/
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
..._ARCHarchitecture macros
BOOST_COMPcompiler macros
BOOST_LANGlanguage standards macros
BOOST_LIBlibrary macros
BOOST_OSoperating system macros
BOOST_PLATplatform macros
BOOST_HWhardware macros
This library defines a set of compiler, architecture, operating system, library, and other version numbers from the information it can gather of C, C++, Objective C, and Objective C++ predefined macros or those defined in generally available headers. The idea for this library grew out of a proposal to extend the Boost Config library to provide more, and consistent, information than the feature definitions it supports. What follows is an edited version of that brief proposal.
The idea is to define a set of macros to identify compilers and consistently represent their version. This includes:
#if/
#elifdirectives, for each of the supported compilers. All macros would be defined, regardless of the compiler. The one macro corresponding to the compiler being used would be defined, in terms of BOOST_VERSION_NUMBER, to carry the exact compiler version. All other macros would expand to an expression evaluating to false (for instance, the token 0) to indicate that the corresponding compiler is not present.
The current Predef library is now, both an independent library, and expanded in scope. It includes detection and definition of architectures, compilers, languages, libraries, operating systems, and endianness. The key benefits are:
#ifdef.
#ifdefchecks.
#include <boost/predef.h>so that it's friendly to precompiled header usage.
#include <boost/predef/os/windows.h>for single checks.
An important design choice concerns how to represent compiler versions by means
of a single integer number suitable for use in preprocessing directives. Let's
do some calculation. The "basic" signed type for preprocessing constant-expressions
is long in C90 (and C++, as of 2006) and intmax_t in C99. The type long shall
at least be able to represent the number
+2 147 483 647.
This means the most significant digit can only be 0, 1 or 2; and if we want
all decimal digits to be able to vary between 0 and 9, the largest range we
can consider is
[0, 999 999 999]. Distributing evenly, this
means 3 decimal digits for each version number part.
So we can:
It appears relatively safe to go for the first option and set it at 2/2/5. That covers CodeWarrior and others, which are up to and past 10 for the major number. Some compilers use the build number in lieu of the patch one; five digits (which is already reached by VC++ 8) seems a reasonable limit even in this case.
It might reassure the reader that this decision is actually encoded in one
place in the code; the definition of
BOOST_VERSION_NUMBER.
Even though the basics of this library are done, there is much work that can be done:
BOOST_WORKAROUNDmacro would benefit from a more readable syntax. As would the
BOOST_TESTED_ATdetail macro.
|
https://www.boost.org/doc/libs/1_64_0/doc/html/predef.html
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Here we will learn passing parameters by reference or call by reference in c# or ref parameter in c# with examples and how to use c# pass by reference or call by reference to pass a value type parameters by reference with examples.
In c#, passing a value type parameter to a method by reference means passing a reference of the variable to the method. So the changes made to the parameter inside of called method will have an effect on the original data stored in the argument variable.
By using ref keyword, we can pass a parameters reference-type and it’s mandatory to initialize the variable value before we pass it as an argument to the method in c# programming language.
As discussed earlier, value-type variables will contain the value directly on it memory and reference-type variables will contain a reference of its data.
Following is simple example of passing parameters by reference in c# programming language.
int x = 10; // Variable need to be initialized
Multiplication(ref x);
If you observe above declaration, we declared and assigned a value to the variable x before we pass it as an argument to the method by using reference (ref).
To use ref parameter in c# application, both the method definition and the calling method must explicitly use the ref keyword.
Following is the example of passing a value type parameter to a method by reference in c# programming language.
using System;
namespace Tutlane
{
class Program
{
static void Main(string[] args)
{
int x = 10;
Console.WriteLine("Variable Value Before Calling the Method: {0}", x);
Multiplication(ref x);
Console.WriteLine("Variable Value After Calling the Method: {0}", x);
Console.WriteLine("Press Any Key to Exit..");
Console.ReadLine();
}
public static void Multiplication(ref int a)
{
a *= a;
Console.WriteLine("Variable Value Inside the Method: {0}", a);
}
}
}
If you observe above example, we are passing the reference of variable x to the variable a in Multiplication method by using ref keyword. In this case, the variable a contains the reference of variable x so the changes that made to the variable a will affect the value of variable x.
When we execute above c# program, we will get the result like as shown below.
If you observe above result, the changes whatever we did for variable in called method has reflected in calling method also.
This is how we can pass parameters to the method by reference using ref keyword in c# programming language based on our requirements.
|
https://www.tutlane.com/tutorial/csharp/csharp-pass-by-reference-ref-with-examples
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Available with Spatial Analyst license.
Available with 3D Analyst license.
Summary
Reclassifies (or changes) the values in a raster.
Usage
The input raster must have valid statistics. If the statistics do not exist, they can be created using the Calculate Statistics tool in the Data Management Tools toolbox..
From the tool,
Reclassify (in_raster, reclass_field, remap, {missing_values})
Return Value
Code sample
The following examples show several ways of reclassifying a raster.
import arcpy from arcpy import env from arcpy.sa import * env.workspace = "C:/sapyexamples/data" outReclass1 = Reclassify("landuse", "Value", RemapValue([[1,9],[2,8],[3,1],[4,6],[5,3],[6,3],[7,1]])) outReclass1.save("C:/sapyexamples/output/landuse_rcls") outReclass2 = Reclassify("slope_grd", "Value", RemapRange([[0,10,"NODATA"],[10,20,1],[20,30,2], [30,40,3],[40,50,4],[50,60,5],[60,75,6]])) outReclass2.save("C:/sapyexamples/output/slope_rcls") outReclass3 = Reclassify("pop_density", "Value", RemapRange([[10,10,1],[10,20,2],[20,25,3], [25,50,4],[50,]]), "NODATA") outReclass3.save("C:/sapyexamples/output/popden_rcls")
This example reclassifies the input raster based on the values in a string field.
# Name: reclassify_example02.py # Description: Reclassifies the values in a raster. # Requirements: Spatial Analyst Extension # Import system modules import arcpy from arcpy import env from arcpy.sa import * # Set environment settings env.workspace = "C:/sapyexamples/data" # Set local variables inRaster = "landuse" reclassField = "LANDUSE" remap = RemapValue([["Brush/transitional", 0], ["Water", 1],["Barren land", 2]]) # Execute Reclassify outReclassify = Reclassify(inRaster, reclassField, remap, "NODATA") # Save the output outReclassify.save("C:/sapyexamples/output/outreclass02")
Environments
Licensing information
- ArcGIS Desktop Basic: Requires Spatial Analyst or 3D Analyst
- ArcGIS Desktop Standard: Requires Spatial Analyst or 3D Analyst
- ArcGIS Desktop Advanced: Requires Spatial Analyst or 3D Analyst
|
http://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/reclassify.htm
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
I defined class A which has a method like this.
def func(self):
while True:
threading.Timer(0,self.func2, ["0"]).start()
time.sleep(nseconds)
threading.Timer(0,self.func2, ["1"]).start()
time.sleep(nseconds)
If I define an instance of a this class in another script and run
func method of this instance, how can I break while loop and stop these threads correctly? Do I need a ctrl-c signal handler in class A, if yes how? Note: I am also calling a system call by
os.system function in
func2 method of class A. Problem is when I run main script file and try to stop running of these threads, they do not stop.
There are myriads of ways to achieve what you want, one of the most straightforward ones would be using Events
from threading import Event class Foo(object): def __init__(self): # the stop event is initially set to false, use .set() to set it true self.stop_event = Event() def func(self): while not self.stop_event.is_set(): # your code
Meanwhile in some other thread (assuming the object your're talking about is
obj):
obj.stop_event.set()
to finish the loop in the next iteration.
|
http://www.dlxedu.com/askdetail/3/2a7b30acbd0e0069c0d183381bb04909.html
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Authentication and Authorization for SignalR Hubs
by Patrick Fletcher, Tom FitzMacken
This topic describes how to restrict which users or roles can access hub methods. topic contains the following sections:
-
Require authentication for all hubs
-
Pass authentication information to clients
Authentication options for .NET clients
Authorize attribute
SignalR provides the Authorize attribute to specify which users or roles have access to a hub or method. This attribute is located in the
Microsoft.AspNet.SignalR namespace. You apply the
Authorize attribute to either a hub or particular methods in a hub. When you apply the
Authorize attribute to a hub class, the specified authorization requirement is applied to all of the methods in the hub. This topic provides examples of the different types of authorization requirements that you can apply. Without the
Authorize attribute, a connected client can access any public method on the hub.
If you have defined a role named "Admin" in your web application, you could specify that only users in that role can access a hub with the following code.
[Authorize(Roles = "Admin")] public class AdminAuthHub : Hub { }
Or, you can specify that a hub contains one method that is available to all users, and a second method that is only available to authenticated users, as shown below.
public class SampleHub : Hub { public void UnrestrictedSend(string message){ . . . } [Authorize] public void AuthenticatedSend(string message){ . . . } }
The following examples address different authorization scenarios:
[Authorize]– only authenticated users
[Authorize(Roles = "Admin,Manager")]– only authenticated users in the specified roles
[Authorize(Users = "user1,user2")]– only authenticated users with the specified user names
[Authorize(RequireOutgoing=false)]– only authenticated users can invoke the hub, but calls from the server back to clients are not limited by authorization, such as, when only certain users can send a message but all others can receive the message. The RequireOutgoing property can only be applied to the entire hub, not on individuals methods within the hub. When RequireOutgoing is not set to false, only users that meet the authorization requirement are called from the server.
Require authentication for all hubs
You can require authentication for all hubs and hub methods in your application by calling the RequireAuthentication method when the application starts. You might use this method when you have multiple hubs and want to enforce an authentication requirement for all of them. With this method, you cannot specify requirements for role, user, or outgoing authorization. You can only specify that access to the hub methods is restricted to authenticated users. However, you can still apply the Authorize attribute to hubs or methods to specify additional requirements. Any requirement you specify in an attribute is added to the basic requirement of authentication.
The following example shows a Startup file which restricts all hub methods to authenticated users.
public partial class Startup { public void Configuration(IAppBuilder app) { app.MapSignalR(); GlobalHost.HubPipeline.RequireAuthentication(); } }
If you call the
RequireAuthentication() method after a SignalR request has been processed, SignalR will throw a
InvalidOperationException exception. SignalR throws this exception because you cannot add a module to the HubPipeline after the pipeline has been invoked. The previous example shows calling the
RequireAuthentication method in the
Configuration method which is executed one time prior to handling the first request.
Customized authorization
If you need to customize how authorization is determined, you can create a class that derives from
AuthorizeAttribute and override the UserAuthorized method. For each request, SignalR invokes this method to determine whether the user is authorized to complete the request. In the overridden method, you provide the necessary logic for your authorization scenario. The following example shows how to enforce authorization through claims-based identity.
[AttributeUsage(AttributeTargets.Class, Inherited = false, AllowMultiple = false)] public class AuthorizeClaimsAttribute : AuthorizeAttribute { protected override bool UserAuthorized(System.Security.Principal.IPrincipal user) { if (user == null) { throw new ArgumentNullException("user"); } var principal = user as ClaimsPrincipal; if (principal != null) { Claim authenticated = principal.FindFirst(ClaimTypes.Authentication); if (authenticated != null && authenticated.Value == "true") { return true; } else { return false; } } else { return false; } } }
Pass authentication information to clients
You may need to use authentication information in the code that runs on the client. You pass the required information when calling the methods on the client. For example, a chat application method could pass as a parameter the user name of the person posting a message, as shown below.
public Task SendChatMessage(string message) { string name; var user = Context.User; if (user.Identity.IsAuthenticated) { name = user.Identity.Name; } else { name = "anonymous"; } return Clients.All.addMessageToPage(name, message); }
Or, you can create an object to represent the authentication information and pass that object as a parameter, as shown below.
public class SampleHub : Hub { public override Task OnConnected() { return Clients.All.joined(GetAuthInfo()); } protected object GetAuthInfo() { var user = Context.User; return new { IsAuthenticated = user.Identity.IsAuthenticated, IsAdmin = user.IsInRole("Admin"), UserName = user.Identity.Name }; } }
You should never pass one client's connection id to other clients, as a malicious user could use it to mimic a request from that client.
Authentication options for .NET clients
When you have a .NET client, such as a console app, which interacts with a hub that is limited to authenticated users, you can pass the authentication credentials in a cookie, the connection header, or a certificate. The examples in this section show how to use those different methods for authenticating a user. They are not fully-functional SignalR apps. For more information about .NET clients with SignalR, see Hubs API Guide - .NET Client.
Cookie
When your .NET client interacts with a hub that uses ASP.NET Forms Authentication, you will need to manually set the authentication cookie on the connection. You add the cookie to the
CookieContainer property on the HubConnection object. The following example shows a console app that retrieves an authentication cookie from a web page and adds that cookie to the connection.
class Program { static void Main(string[] args) { var connection = new HubConnection(""); Cookie returnedCookie; Console.Write("Enter user name: "); string username = Console.ReadLine(); Console.Write("Enter password: "); string password = Console.ReadLine(); var authResult = AuthenticateUser(username, password, out returnedCookie); if (authResult) { connection.CookieContainer = new CookieContainer(); connection.CookieContainer.Add(returnedCookie); Console.WriteLine("Welcome " + username); } else { Console.WriteLine("Login failed"); } } private static bool AuthenticateUser(string user, string password, out Cookie authCookie) { var request = WebRequest.Create("") as HttpWebRequest; request.Method = "POST"; request.ContentType = "application/x-www-form-urlencoded"; request.CookieContainer = new CookieContainer(); var authCredentials = "UserName=" + user + "&Password=" + password; byte[] bytes = System.Text.Encoding.UTF8.GetBytes(authCredentials); request.ContentLength = bytes.Length; using (var requestStream = request.GetRequestStream()) { requestStream.Write(bytes, 0, bytes.Length); } using (var response = request.GetResponse() as HttpWebResponse) { authCookie = response.Cookies[FormsAuthentication.FormsCookieName]; } if (authCookie != null) { return true; } else { return false; } } }
The console app posts the credentials to which could refer to an empty page that contains the following code-behind file.
using System; using System.Web.Security; namespace SignalRWithConsoleChat { public partial class RemoteLogin : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { string username = Request["UserName"]; string password = Request["Password"]; bool result = Membership.ValidateUser(username, password); if (result) { FormsAuthentication.SetAuthCookie(username, false); } } } }
Windows authentication
When using Windows authentication, you can pass the current user's credentials by using the DefaultCredentials property. You set the credentials for the connection to the value of the DefaultCredentials.
class Program { static void Main(string[] args) { var connection = new HubConnection(""); connection.Credentials = CredentialCache.DefaultCredentials; connection.Start().Wait(); } }
Connection header
If your application is not using cookies, you can pass user information in the connection header. For example, you can pass a token in the connection header.
class Program { static void Main(string[] args) { var connection = new HubConnection(""); connection.Headers.Add("myauthtoken", /* token data */); connection.Start().Wait(); } }
Then, in the hub, you would verify the user's token.
Certificate
You can pass a client certificate to verify the user. You add the certificate when creating the connection. The following example shows only how to add a client certificate to the connection; it does not show the full console app. It uses the X509Certificate class which provides several different ways to create the certificate.
class Program { static void Main(string[] args) { var connection = new HubConnection(""); connection.AddClientCertificate(X509Certificate.CreateFromCertFile("MyCert.cer")); connection.Start().Wait(); } }
|
https://docs.microsoft.com/en-us/aspnet/signalr/overview/security/hub-authorization
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Read John Resig's blog post for more details:
Read John Resig's blog post for more details:
You could do exactly the same using very simple Knockout. It wouldn't be as funky, but it would be something you'd be happy to use in production.
This language conveys so much more competence than the standard 'seeking unix ninja rockstar' stuff that seems to be de rigeur these days.
Phrases like: "You must have experience designing and building large and complex (yet maintainable) systems" are so vague and ambiguous that if I honestly saw this post from some guy named "Bezos" in '94, I would have written off as a jokester.
At least in '94 they haven't started using the word "disruptive" as if it's something you can do to a whole industry overnight. Thank goodness.
It's grep, just better. It highlights the selected text, it shows which files, and in what line the text was found (and uses vivid colors so you can distinguish them easily), ignores .git and .hg directories (among others, that shouldn't be searched) by default, you can tell it to search, for example for only `--cpp` or `--objc` or `--ruby` or `--text` files (with a flag, not a filename pattern), and many many other neat features that I'm sure grep has, but you have to remember and memorize them. ack has sensible defaults.
Why ack?
manpage:
Oh, and ack is written in perl and doesn't require admin privileges to install.
brew install
Someone commented on the article that this might be caused by missing off the -F flag; I tried this, and -F makes both versions slightly faster again.
My only concern is what about search engine visibility? If I were to build an app like this with Meteor would Google see the page content?
I think there are lot's of improvements that could be made to sites like Reddit/HN. But thanks a lot for open sourcing it this will be very helpful to me in learning meteor!
edit: Just noticed you are planning on writing a book. Can't wait!
P.S. I'm a subscriber to your newsletter -- great stuff! Keep it up.
Congrats, I knew you'd make the front page, Sacha! And so good to know it's holding up, that changes my plans considerably, I'll plod on full steam on top of Telescope and Meteor.
For context, I forked this app for an MVP[1] showing Meteor's own roadmap, up for vote, in HN-clone format, which went live only a week ago[2].
[1]
[2]
This means it's just a "source available" app but normal copyright applies. I think this can be a bit misleading but nonetheless congratulations!
I've been skeptical of Meteor, but this speaks well for it.
Still concerned about the security/data leakage/authentication methodology though.
Thanks for posting this, awesome to see examples of polished Meteor projects!
Update with link to demo, looks very nice:
Aside from that, I don't see the utility of being able to see the stories change order, or comments come in. Especially once you have large numbers of comments and the stories reorder constantly.
I just don't see what is so interesting about Meteor - and I love node.js and JavaScript.
It is incredibly frustrating that in order to be able to find an email I received years ago I have to figure out exactly how someone might have written a certain term in that mail. And I cannot see any excuse for not offering that feature; limit me to a few substring searches a day if resources are an issue and I don't expect fully-indexed lightning-fast results, a simple "grep" so to speak is just fine...but please let me search my mails properly!
In related news nytimes.com used to have a similar feature where the definition of words would pop up when you selected them. It basically caused me to stop reading their site.
I guess RMS has a point.
I honestly think HN should be doing more about linkbait like this.
"I haven't heard of this feature" != "no one knows about this feature".
Here is the email feature I want. If I paste a URL that looks like a post/article into a new message, I want the slug automagically split, title cased and copied into the subject line.
For example:
The Greatest Google Mail Feature You
Similarly, BufferApp post with the selected text instead of the title.
This is indeed a good UX feature and people should use this where it make sense - select text and put it in context with the next action.
[1]
[2]
Off topic, but a similar feature exists in Pinboard (). You can select some text on the page before clicking bookmark, and that gets set as the description of the page in the bookmark. It's a pretty handy feature if the page title is not enough to describe what the page is about.
We're not talking about false positives, these are emails that stay in my inbox for days before being moved to the spam folder. Which basically means I need to check my spam folder every day. Trust forever lost.
The greatest feature about Gmail that not enough use is 2-factor auth (even though it is not limited to Gmail- other web-based mail services provide it); it is a pain in the arse, but after you get hacked once or twice, you'll be happy you did it. Popular Saas apps are prime targets for being hacked. It may mean they are safer, but they are also riskier to use. If you're not using 2-factor auth, you should probably not use Gmail, unless a hacker taking control of your account wouldn't bother you or your contacts.
But I'm looking at the on/off radio button in labs in another tab right now.
most of my email flows like a conversation, i don't need to bring back older parts if my recipient already has them in front of him.
Amazon attempts to ease the pain by offering "interface parity" with the Google Maps API, but there are significant functional differences.
We are going to see more and more examples of this where mobile platform vendors are going to try to get developers to use their firm's web services when running on their platform. Bummer for devs who are already struggling with trying to target multiple platforms.
[1]...
demo looks like a demo. basically it's because they're not part of the android group. do they have the google maps api libraries on the device? just curious i honestly don't know. I'd imagine there are licensing issues?
The advantage they have is a female plug, you can't physically break it like standard usb key (most of these keys are broken after a year).
But they're disappearing too, people tend to paint walls :)
Also, there is a sizable amount of dependencies for this library, all due to the use of flot. Since most of the code doesn't use any of the dependencies, I wonder if they would consider releasing any future versions as two parts, jstat.js (containing the number crunching methods) and jstat-flot.js (containing the plotting wrapper methods).
I almost thought they weren't on GitHub.
P.S. Functionn contains a whole lot more of awesome resources like jStat. There only a fraction of them I can post here at a time. Take a look if you're interested, and subscribe:
Sold. Given the amount of prototyping I do on my workstation requiring constant review of previous revisions for collaboration, this is something that lets me get a link sent and stays as far out of the way of my workflow as possible-and it supports an application my team already uses? Yep. Sold.
I might fork this so I can add the option to define which directory in /Public, since I'm OCD like that (having stray files kills me). Thanks for this!
That is why I use a hacked version of Gyazo (). I modified the script so that moves the file to dropbox instead of their monstrosity of a share page.
Gyazo is the best because you launch the app, are presented with a standard screenshot UI, and then the app closes and your URL is copied and opened in a browser.
Here is my modified gyazo script if you want to try it:
I gave it a whirl:
1. Snappy, which is nice, since PyCharm can be sluggish on my Mac
2. No VCS integration
3. By default very strict code checking is turned on, which turns my (functional) code into a sea of underlines, which is not so pretty
It looks to be an interesting start, but it will need VCS integration before it looks suitable as a PyCharm replacement.
I didn't look in detail at code completion/code assist, which PyCharm does very well.
(Seriously, check it out - KDevelop's Python plugin and Microsoft's PTVS are currently the two projects doing serious work on static analysis of Python for live editing purposes. Here's a nice subthread comparing the two:)
* Scrolling is way too slow. This isn't nitpicking, this is really very important to me
* I like PEP8 warnings and use them in other editors, but I don't like not being able to pick which style stuff I care about
* I don't like the PEP8 tooltips. They cover up my code and that's the worst possible place to put them. Even if I do plan to "fix" the issue, coming up over the code that I'm typing right now is never okay.
* It's really quite a lot of work through some confusing terminology to get a test run of the IDE going on an existing project. I don't want to move my code into your workspace. I don't want to import my existing project (that sounds scary)
* Some glaring bugs seem to indicate that this is more young than is indicated on the very flashy project site. For instance, if I try to import a project but cancel the "select a directory" popup, I inconsistently get it either removing my previous selection or crashing the whole IDE
Also, changing the margin line doesn't seem to take effect unless you quit and restart the IDE.
Thank you.
From a usability perspective, your download button could be better. It doesn't download right away (which is fine), but redirects to downloads/win for me. Might be nice to have it auto-scroll to the win downloads since it took me a while to figure out what was going on.
Here's a screenshot from Win7 32-bit:
That random pink line makes it unusable for me.
I hope we can find the time to take care of some of the stuff mentioned here as videos, screenshots, user guide, etc.
It's a lot of work, but we are proud of what we can achieve with a free software project.
Thx everyone!
All in all it looks very nice, thanks for sharing.
I've been looking forever for a text editor that does this and surprisingly few do.
Looks good though. I thought it was going to be YET ANOTHER ECLIPSE distribution, but apparently it's not. It seems to be pretty fast. Hope they fix the crashing issue on Lion soon.
Also, I think would be nice if there was a way to interact with the console after running a script. I realize this may be sort of an odd request, but it is very convenient when you're not quite sure on how you want to solve a problem, and you need to try out some solutions interactively. I greatly enjoy this in spyder, my current python ide of choice.
Go on
Yeah! Some jerk who runs my MTA set the size of acceptable attachments really low! I wonder who did that...
$ host -t mx mydomain.com
mydomain.com mail is handled by 0 aspmx.l.google.com.
Oh... I see.
Sending and sharing files are two of those things that are just now sluggishly rolling over to discover that it's a new millennium.
Dropbox and Drive are making great strides lately and I'm really thankful for it. Using Dropbox to have the same "folder" across three computers is the first time synced sharing ever felt intuitive enough for my (71 year old) father to regularly use, and now he can use this to reliably send larger files to people without any worry of fouling up permissions (that would otherwise be difficult for him to understand).
That's why i just run my own "cloud" on my own premises. If I want to give someone access to a file, I just throw it on my Synology DiskStation and the receiver can get at it via FTP or HTTP client.).
EDIT: I guess it's a moot point if you're already using Gmail.
I use Google's cloud-based services for as much as I can, but it's still not seamless and is annoying when I have to open a new window to access a service run by the same company providing the one in the page I'm on.
Next step: Please allow me to easily save PDF's and other documents directly to Drive from a URL. I shouldn't have to download a file to my device and then upload it to drive.
disclaimer: I work for a Google competitor
From it I was able to figure out what was wrong with the C++ program. Notice that the GPF lists the instructions at CS:EIP (the instruction pointer of the running program) and so it was possible by generating assembler output from the C++ program to identify the function/method being executed. From the registers it was possible to identify that one of the parameters was a null pointer (something like ECX being 00000000) and from that information work back up the code to figure out under what conditions that pointer could be null.
Just from that screenshot the bug was identified and fixed.
One of the suggestions was that the kernel could do more. Solaris-based systems (illumos, SmartOS, OmniOS, etc.) do detect both correctable and uncorrectable memory issues. Errors may still cause a process to crash, but they also raise faults to notify system administrators what's happened. You don't have to guess whether you experienced a DIMM failure. After such errors, the OS then removes faulty pages from service. Of course, none of this has any performance impact until an error occurs, and then the impact is pretty minimal.
There's a fuller explanation here:...
I haven't seen a server without ECC memory for years. I don't even consider running anything in production without ECC memory, let alone VM hypervisors. I find it pretty hard to believe that EC2 instances run on non-ECC memory hosts, risking serious data loss for their clients.
Memory errors can be catastrophic. Just imagine a single bit flip in some in-memory filesystem data structure: the OS just happily goes on corrupting your files, assuming everything's OK, until you notice it and half your data is already lost.
Been there (on a development box, but nevertheless).
Note the section on DRAM scrubbing, which I was reminded of from the original article's suggestion on having the kernel scan for memory errors. (I remember when Sun implemented scrubbing, I believe in response to a manufacturing issue that compromised the reliability of some DIMMs.)
It only does a few things, but it does them exceedingly well. Just like nginx, I know it will be fast and reliable, and it is this kind of crazed attention to detail that gets it there.
Although our production code is written in C, I'm not particularly worried about detecting wild writes, because we use pointer checking algorithms to detect/prevent them in the compiler. (Of course, that could be buggy too...)
What I'm trying to catch are wild writes from other devices that have access to RAM. Anyway, this is far from production code so far, but hashing has already been very successful at keeping data structures on disk consistent (a la ZFS, git), so applying the same approach to memory seems like the next step.
The speed hit is surprisingly low, 10-20%, and when you put it that way, it's like running your software on a 6 month old computer. So much of the safety stuff we refuse to do "for performance" would be like running on top-of-the-line hardware three years ago, but safely. That seems like a worthwhile trade to me...
P.s. Are people really not burning in their server hardware with memtest86? We run it for 7 days on all new hardware, and I figured that was pretty standard...
Now, this may not be such a huge problem in practice because the OS is unlikely to move pages around unless it's forced to swap. But that depends on details of the OS paging algorithm and your server load.
In my experience in more 'agile' firms - startups, web dev shops and so on - it would be very hard to make a scheme like this work well, because of all the grinding bureaucracy, fiddly spec-matching and endless manual testing required, as well as the importance of controlling - and deeply understanding - the whole stack. Nonetheless, for infrastructure projects like Redis, I can see value in having engineering effort put explicitly into making 'prettier crashes'.
Here is a variation which, unless I'm missing something, would be a little simpler still and require less full-memory loops:
1. Count #1's in memory (possibly mod N to avoid overflow).
2. Invert memory.
3. Count #0's in memory.
4 Invert memory.
I think this would catch the same errors (stuck-as-0 or stuck-as-1 bits).
One difficulty is that multiple errors could cancel each other out, at which point you can do things like add checkpoints in the aggregation, or track more signals such as number of 01's vs number of 10's. In the end, this is like an inversion-friendly CRC.
Here are my additional two cents: At least on X86 systems, to check small memory regions without effects on the CPU cache can be implemented using non-temporal writes that will directly force the CPU to write the memory back to memory. The instruction required for this is called movntdq and is generated by the SSE2 intrinsic _mm_stream_si128().
I think the title of the article could be more accurate, considering how much is devoted not to issues about software reliability per se, but to distinguishing between unreliable software and unreliable hardware. I think an implicit assumption in most discussions about software reliability is that the hardware has been verified.
I personally do not think that it is the responsibility of a database to perform diagnostics on its host system, although I can sympathize with the pragmatic requirement.
When I am determining the cause of a software failure or crash, the very first thing I always want to know is: is the problem reproducible? If not, the bug report is automatically classified as suspect. It's usually not feasible to investigate a failure that only happened once and cannot be reproduced. Ideally, the problem can be reproduced on two different machines.
What we're always looking for when investigating a bug are ways to increase our confidence that we know the situation (or class of situation) in which the bug arises. And one way to do this is to eliminate as many variables as possible. As a support specialists trying to solve a faulty computer or program, I followed the same course: isolate the cause by a process of elimination. When everything else has been eliminated, whatever you are left with is the cause.
I'm still all jonesed up for a good discussion about software reliability. antirez raised interesting questions about how to define software that is working properly or not. While I'm all for testing, there are ways to design and architect software that makes it more or less amenable to testing. Or more specifically, to make it easier or harder to provide full coverage.
I've always been intrigued by the idea that the most reliable software programs are usually compilers. I believe that is because computer languages are amongst the most carefully specified kind of program input. Whereas so many computer programs accept very poorly specified kinds of input, like user interface actions mixed with text and network traffic, which is at higher risk of having ambiguous elements. (For all their complexity, compilers have it easier in some regards: they have a very specific job to do, and they only run briefly in batch operations, producing a single output from a single input. Any data mutations originate from within the compiler itself, not from the inputs they are processing.)
In any case, I believe that the key to reliable programs depends upon the a complete and unambiguous definition of any and all data types used by those programs, as well as complete and unambiguous definitions of the legitimate mutations that can be made to those data types. If we can guarantee that only valid data is provided to an operation, and guarantee that each such operation produces only legitimate data, then we reduce the chances of corrupting our data. (Transactional memory is such an awesome thing. I only wish it was available in C family languages.)
One of my crazy ideas is that all programs should have a "pure" kernel with a single interface, either a text or binary language interface, and this kernel is the only part that can access user data. Any other tool has to be built on top of this. So this would include any application built with a database back-end.
I suppose that a lot of Hacker News readers, being web developers, already work on products featuring such partitioning. But for desktop software developers who work with their own in-memory data structures and their own disk file formats, it's not so common or self-evident. Then again, even programs that do rely on a dedicated external data store also keep a lot of other kinds of data around, which may not be true user data, but can still be corrupted and cause either crashes or program misbehaviour.
In any case, I suspect that this is going to be an inevitable side-effect of various security initiatives for desktop software, like Apple's XPC. The same techniques used to partition different parts of a program to restrict their access to different resources often lead to also partitioning operations on different kinds of data, including transient representations in the user interface.
Can a program like Redis be further decomposed into layers to handle tasks focussed on different kinds of data to achieve even better operational isolation, and thereby make it easier to find and fix bugs?
This post is fantastic.....
It appears Linode removed the 768 and 1536 plans, renamed the 1024/2048/4096 plans to 1GB/2GB/4GB, and added an 8GB plan. They also added a row in the table showing CPU priority. The 512 plan is unchanged, as are specs and prices for the other three remaining plans.
Linode's "priority" seems like ex-Slicehost's way of saying "hey bigger machines get a higher proportion"... nothing really useful for figuring out exactly what you're buying.
And you can't ever really figure out what you have: things could be severely over-committed, and you'll never know until you get starved. So you can't just benchmark your way out of it.
Unless that's changed, that would mean that all instances running on a given machine share the same CPU Priority, there will just be fewer instances demanding service from the CPU(s) the larger the plan you have.
...so wondering if that's what CPU Priority means, or if Linode is about to mix instance sizes on same hardware?
And maybe I'm just ignorant on the topic, but what exactly does CPU priority do here? I understand basic linux process priority (like the 'nice' command), but how exactly does CPU priority behave on linode. Searching through their docs, I couldn't find anything.
EDIT: to maybe answer my own question, maybe this is the Xen credit schedule?
For my needs, $30/mo was about as much as I'd spend on a server to host mine and a few friend's blogs, some photos, and some remote services. $40 is too much for me and the lower plan just doesn't have enough RAM to be interesting.
So now my options are 1) find somewhere else, or 2) backup my data and rebuild the box in place.
0 - I manage a few Linode 768s including my own. 768 was a great size for a few small blogs and a low traffic Rails site, or a larger traffic blog.
I'm assuming that meant access to part of a processor, but how does that work with 4 CPU and 16x priority? (I'm working on the assumption that 1x priority ~= 1 core.) Of course, my assumption is probably wrong - just curious how this affects the load on a given server and how the VPS interacts with other VPS's on that node.
So what is the appeal of Linode? That you can upgrade to a faster server quickly?
It's quite easy to get a decent micro HP server (even with SSD storage) within $1000, which would cost $150.00 - $300.00 a month for a equivalent plan on Linode. Suppose you upgrade your server every two years, the monthly cost of the server is less than $50. You get dedicated CPU time and I/O, permissions to managing everything.
Internet bandwidth might be a problem. But let's put ourselves in the 2 or 3 years future. What if you already have Gigabit Internet like Google Fiber for $70/mo?
And you get other benefits for owning a server in your house. Since it's connected to your home LAN, it can be used to help build a smart home, control smart sensors/cameras, or serve as a media server.
Am I missing something here?
(We setup a few vps's with rackspace and have been happy so far.)
EC2 is good but their spin-up time is crap.
Though same-kernel is obviously a security reduction, the speed is far better: I for one can't wait to see more LXC and other lightweight virt stuff being made available with real cgroup-level guarantees.
About CPU priority, Linode never kept it a secret.
For the small VPS (512MB RAM), you get a guaranteed 1/20 of a 4 core XEON processor and it scales linearly with each plan's RAM.
As explained on their FAQ, their machines have 8 cores each and house 40 512MB VPS.
For those that don't remember hackers managed to get root access to several VPS via some Linode vulnerability. Didn't bother to let customers know. Didn't bother to update their status/website. Didn't bother to tell anyone what they've done to fix it. Compare that with CloudFlare:...
Linode continues to be a recurring example of how not to behave as a vendor.
Knowing whether the stock is likely to go up or down is dangerous without having some model of how much it will go up or down. It could have a 90% chance of going up but with a negative expected return (because if it goes up it will only go up slightly but if it goes down it will crash).
I wonder if this guy understands this? His "up/down" language suggests that maybe he doesnt and his group is only making money by "picking up pennies in front of a steamroller".
Monty Hall and the Birthday problem? Are they still asking these old hackneyed interview questions!?
As a whole everything this guy said in the interview sounds a bit naive/old-hat. Are people really still blindly trading the correlation between MSFT and Oil?
The whole interview reads like what a junior statistics graduate thinks quant trading funds do rather than how they actually make money nowadays.
It's like he hasn't even read Fooled by Randomness....
-- zero value added, in other words.[1]
________________
[1] Front running is not the only use of statistics and quants in finance, however.
"Video content is protected with our BrainTrust™ DRM, and is unplayable except by a legitimate owner. All aspects of the platform feature a near-ridiculous level of security."
Near-ridiculous security seems about right.
I'm not saying this is right, necessarily, but I think companies know full well that their DRM scheme will be broken, so it's not really worth investing in an "uncrackable" and costly solution. Instead, the role that DRM play is purely legal -- when the company does decide to go after someone for piracy, the DRM scheme, no matter how simple, provides them with the ability to say that the accused person "broke a lock," rather than simply walking in through an unlocked door. "Entering" vs. "breaking and entering." It's nothing but legal leverage, and effective at that role even if it's not a very strong lock.
Of course, to have this argument hold, a company would never be able to admit that they purposefully implemented weak security -- this would be akin to admitting that their door was unlocked afterall, and would weaken their legal argument. Therefore, there remains a niche in the market for solutions that look secure even if they fundamentally aren't. It's all about lip service.
Of course this is only marginally better and should really have been caught, but there's a huge difference between saying that XORing 12 bytes with RANDOM_STRING is kick-ass DRM and actually having a kick-ass DRM infrastructure that then doesn't work right because of a bug.
If this was any really random looking string, I would be more inclined to assume that this was intentional. By the string being this token, I would guess it's a bug somewhere.
Remember. If RANDOM_STRING was truly random, unique per file and account and only transmitted from the server before playing, then this would be as good an encryption as any.
Given the evidence (complex integration with a non-standard set of open source libs, complex industry area in general), I'd say it's almost certainly an insult to imagine the developer could not have made your life harder if he'd chosen to.
Please, if anything commend the dear fellow, and shame on whoever considered a momentary glimpse of Google Plus limelight worth making this guy's Tuesday morning and ongoing professional reputation much harder earned than it otherwise might have been.
"No good deed goes unpunished"
Ahh..the good old days of SoftICE and w32disassm.
Oh man, the worst was the md5 of some salt + whatever you put in.
If you ever want to see some gems of misuse of cryptography for DRM management, let me know - email's in my profile.
Some examples: Using RSA 1024 bit keys, with exponent of 3...
Of course, if a copy protection system was "effective" it wouldn't need a law prohibiting its circumvention. Conversely, if a copy protection system is circumventable, it's not effective.
[1] Assuming a general computation device, not a dedicated hardware player.
Either way.. wow... XOR encryption with just such a short repeating string! I bet it wouldn't be too hard to decrypt it even without the original file, since the file signature alone would probably be longer than the string. DISCLAIMER: I'm just speculating, I don't know the .mov specs.
The problem is marketing folks getting carried away when describing these "technology solutions" to the content owner, because that's what they (as well as VCs) want to hear.
Disclaimer: cofounded a video CDN+DRM provider more than a decade ago, developed many content protection methods over the years.
Now that I read the article twice, I literally got a panic attack when I realized that it wasn't a random string that they were xor'ing their data with, but a string called "RANDOM_STRING". Although it sounds bad, one must realize that this is not security by obscurity since the key has been leaked, and nobody guarantees encryption against a leaked key.
Isn't VLC licensed under the GPL? Or at least was until very recently?...
Is/was Leaping Brain violating the license?
EDIT: the wrapper script is apparently released under the GPL too:
It might be a good idea to remove their names, to protect their reputation. ;-)
As far as I recall the Adobe PDF encryption was also just some XOR with a simple passphrase. Got him into serious trouble.
And WTH is 'virtually uncrackable'?
But did the NYTimes just point to California's current budget as a sign of our State's "economic stability"?
They quote that the "California Legislative Analyst's Office projecting . . . . [that] California might post a $1 billion surplus in 2014"
That is such an irresponsible thing to publish - a "surplus" makes it sound like the government is totally on top of the situation here and a model for success.
The reality is that CA would be going off the @!@W$! rails right now in 2012 if voters hadn't just passed Prop 30.
Prop 30 was an EMERGENCY, retroactive (from Jan 1 2012), and "short-term" (7 year) tax on high income earners + a moderate increase in sales tax which is going to raise $6B a YEAR to pull CA's ass out of the fire.
So while the gloom may be lifting, I remain pessimistic that our state gov't has any long-term solutions to their continued budgetary missteps.
I wonder what jgrahamc has to say about Cloudflare shielding sites used to ddos people.
Instead, go with what your distribution gives you. The people who put your favourite distribution together work on making the system safe and secure as a whole. People who don't think it is safe and secure file bugs and they get fixed. And you have one place to get all your updates in case fixes are needed.
If you start adding third party sources, you're on your own as to managing any implications of the way you've put it together. Just because each individual component is safe and secure doesn't mean that it is as a whole. For example, Ubuntu add hardening (AppArmor) for various server daemons which you won't get if you just download apache from the project website.
If you need a guide on putting a system together yourself, then you aren't someone who can manage these implications yourself, and you're trusting the guide author in not having made any mistakes. Are you really in a position to judge his competence?
Just use your distribution's standard web server and you'll get your safe and secure Web server in one command.
Virtualization was build for server providers to make easy money, not for server owners to gain performance advantages.
Vistualization is not for production. Production servers need less code, not more.
It is the same kind of mistake as JVM - we need less code, integrated with OS, not more "isolated" crapware which needs networking, AIO and really quick access to the code from shared libraries.
And, of course, a setup without middle-ware (python-wsgi, etc) and several storage back-ends (redis, postgres) is meaningless.
Update:
Well, production is not about having a big server which is almost always 100% idle, and can be partitioned (with KVM, not a third-party product) to make a few semi-independent virtual servers 99% idle. This is virtual, imaginary advantage.
On the other side, your network card and your storage system cannot be partitioned efficiently, despite all they say in commercials. And that VM migration is also nonsense. You are running, say, a MySQL instance. Can you migrate it without a shutdown and then taking a snapshot of an FS? No. So, what migration you're talking about? It is all about your data, not about having a copy of a disk-image.
It is OK to partition development, or 100% idle machines - like almost all those Linode instances, which have a couple of page request in a day - this is what it was made for, same as old plain Apache virtual-hosting. But as long as one needs performance and low latency, all the middle-men must go away.
I sense a little bit of bias.
As a multiplatform developer I can think of a number of reasons why someone might opt to go the Windows Server route. ASP.NET MVC 4 is a first class framework and many prefer it over other popular alternatives on other platforms such as Django, Rails, and Cake. In addition, Visual Studio is arguably the best IDE available and publishing to an IIS server is dead simple.
As for cost, full versions of Visual Studio and Windows Server can both be obtained for free through the DreamSpark program for college students and through the similar BizSpark program for startups and small businesses.
as well as doing chkrootkit.
Also a good idea to have your log files backed up somewhere else where your server does not have sufficient access to delete (or modify) them.
Also if you have multiple web apps running, chroot them if at all possible so that if something does break out it can't (so easily) wreak havok over your entire filesystem.
If you are using PHP also bare in mind that a common default is for all sessions to be written to /tmp which is world read and writeable. So if others have access to your server they can steal or destroy sessions easily.
I also didn't see mention of an update strategy for security updates. You can use apticron to email you with which updates are available and which are important for security.
You can set updates to go automatically (I recommend security only) but if you are more cautious
you might want to test on a VM first. But keep an eye on them! This is very important, especially if you are managing wordpress etc through apt.
And so many other things that I have probably forgotten.
Having some form of audit (that tripwire can provide) is vital in those "oh fuck" moments where something doesn't seem quite right and you start wondering if you have been pwned but have no real way of actually knowing.
If you have a small server, I'd really recommend checking out these scripts that assist with configuring and setting up a server very quickly:
I personally used a fork of lowendscript last year to set up some servers, but if I had to set up a new server today, I'd check out some of the other other options at that link, like Minstall:
But this Xeoncross lowendscript fork is still very active:
Why? Because it doesn't really mention a technical choice against MS.
Don't get me wrong i would never ever use Windows Server but when i'd write such an article i'd have to find at least a few technical pros and cons for the choices i preset.
"Uhhh, the internet is more like a unixy thing" doesn't cut it.
This goes on with the choice for Ubuntu Server. Why? Is it an article about "safe and secure web server" or about "how does my grandma set up a server"?
There are much more choices in terms of reliability and proven track record like FreeBSD, OpenBSD, Debian, RHEL/CentOS. The choice was made because it's easier to set up and apparently the author is too lazy to _really_ do his homework.
In the end, i'd say if the articles title would be "beginners guide how to setup a server" i wouldn't comlain.....
Has ECC RAM support. Takes 4 3.5" hard disks, and runs very quiet and cool.
I feel like doing this stuff by hand should be considered insecure and outdated.....
also, chrooted sftp-only accounts...
All the nginx setup and config things are REALLY useful, all the more regarding the poor quantity/quality of resources one can find out there. Really useful to me. I wish I had one guide like this when I setup my own webserver.
I made a tl;dr version but the main interesting parts stay all the nginx tricks for me.
can't believe that hasn't been on HN before, so added it:
|
http://hackerbra.in/news/1354100521
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
I want post json request like that
{"jsonrpc": "2.0", "method": "testApi", "params": {
"message": "abc"
}
, "id": 1}
I read posts:
How to POST raw whole JSON in the body of a Retrofit request?
but i can'not found class TypedInput, TypedByteArray, TypedString in my retrofit2 package. Where is?
To POST a body in
Retrofit, you create an object that represents this body, a class that includes
String jsonrpc,
String method, etc. Then, pass this object to the method that you define in your service interface and has a param with
@Body annotation.
Here is an example for POST body object:
public class PostBody{ String jsonprc; String method; Param param; public PostBody(...){ //IMPLEMENT THIS } ... class Param{ //IMPLEMENT THIS } }
|
https://codedump.io/share/XBVewsCbtJa1/1/post-raw-json-in-body-retrofit2
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
By now you’ve certainly heard of the Kinect sensor, and very likely about the freshly-released Kinect SDK developed by some of our colleagues here at Microsoft Research. We wanted Sho users to be able to get in on the fun too, so we’ve worked out a bit of glue code (kinect.py, included at the end of this article) to make it easy for you to get at Kinect data from Sho. In the usual Sho way, you won’t need to compile any code; just a few lines of script will get you going. With only 70 or so lines of code you can create the sorts of fun animations you’ll see below!
As you may know, the Kinect is a pretty complex sensor, and the SDK adds even more layers of sophistication. The hardware itself is comprised of a depth sensor and a microphone array; the SDK extracts skeletons of multiple people in the scene, high-quality audio output, and sound source localization (estimates of where the sound is coming from). In this post, we’re only going to work with the skeleton data; if there’s demand (i.e., if we hear from you!), we can add more posts to get the other sorts of data as well.
It’s often best to start with a demo. Let’s start with something basic: let’s use Sho’s built-in plotting functionality to just plot all the available skeletons and label all the individual joints. Kinect.py has a KinectTracker class which uses a polling model to get skeleton information. The class’ skeletonFrame member will contain the most recent set of skeletons (or None if no skeleton has been found yet), and skeletonUpdateTime will be set to the last time the skeletons were updated. To make it a bit more fun, we’ll have the skeletons update in real time within the plot (see the video). The video shows Sho’s new double buffering feature for plots, which will appear in Sho 2.0.5 (to be released in the next few weeks); in the current version of Sho you’ll see some “flashing” when updating the plot.
The next question, of course, is how we did that and how much code it took. Fortunately it’s quite simple and not very much code at all:
def bodyplot(kinecttracker):
def plotjointseq(jointlist): # helper func to plot a set of lines through a series of jointnames x = [jointhash[jointname][0] for jointname in jointlist] y = [jointhash[jointname][1] for jointname in jointlist] plot(x,y,'k-')
# set up plot clearplot() p = plot([1],[1]) # double buffering mode - uncomment the below for Sho 2.0.5+ # p.Freeze = True lastplottime = System.DateTime.Now jointhash = System.Collections.Hashtable() hold(True)
# keep doing this until the user hits Ctrl-Shift-C in the console while(True): System.Threading.Thread.Sleep(0) if kinecttracker.skeletonFrame: clearplot() p.HorizontalMajorGridlinesVisible = False axisranges(-.8,.8,-.8,.8)
# iterate through all found skeletons for skeleton in kinecttracker.skeletonFrame.Skeletons: # only draw valid skeletons if skeleton.TrackingState == SkeletonTrackingState.Tracked: x = [] y = [] labels = []
# iterate through joints and store their # location in jointhash for joint in skeleton.Joints: # extract the jointname from the id, where the id is # something like: # Microsoft.Research.Kinect.Nui.JointID.FootRight jointname = joint.ID.ToString().split(".")[-1] z = joint.Position.Z + 1e-10 # project x location into viewplane using z currx = joint.Position.X/z x.Add(currx) # project y location into viewplane using z curry = joint.Position.Y/z y.Add(curry) labels.Add(jointname) # store the projected locations and the original x,y,z jointhash[jointname] = (currx,curry,\ joint.Position.X,\ joint.Position.Y, z)
# draw joints with joint labels plot(x,y,'b.',size=10, labels=labels)
# draw lines connecting joints plotjointseq(["HipCenter","Spine",\ "ShoulderCenter","Head"]) plotjointseq(["ShoulderCenter","ShoulderLeft",\ "ElbowLeft","WristLeft","HandLeft"]) plotjointseq(["ShoulderCenter","ShoulderRight",\ "ElbowRight","WristRight","HandRight"]) plotjointseq(["HipCenter","HipLeft","KneeLeft",\ "AnkleLeft","FootLeft"]) plotjointseq(["HipCenter","HipRight","KneeRight",\ "AnkleRight","FootRight"])
# compute the framerate and use a label to display it framerate = 1000.0/\ ((System.DateTime.Now\ -lastplottime).Milliseconds) plot([-.7],[.7],labels=[str(round(framerate))+" fps"]) lastplottime = System.DateTime.Now
# set the size of the graph axisranges(-.8,.8,-.8,.8)
# in double buffering mode, update the view # (uncomment below for Sho 2.0.5+) # p.Render()
That example demonstrates all the important stuff: how to get at individual joints and their 3D locations, as well as how the joints are connected together. From there, it’s easy to add a few more lines of code and create simple virtual 3D objects. In the example below, I’ve created a simple staff object with a fixed 3D length that’s centered between the two hands. The code below can be inserted right before the “draw joints” section in the bodyplot() function above:
# draw red staff handleftpt = DoubleArray.From(jointhash["HandLeft"]) handrightpt = DoubleArray.From(jointhash["HandRight"]) # compute center point between hands handcenterpt = (handleftpt+handrightpt)/2.0 # compute vector along staff direction staffvec = handrightpt-handleftpt # normalize it to unit length staffvec = staffvec/(norm(staffvec)+1e-10) # compute the left and right end of the staff # it's two units long and .05 units wide, centered between the hands staffleftpt = handcenterpt -staffvec*1.0 staffrightpt = handcenterpt + staffvec*1.0 # if the hands are close enough to each other, draw the staff if norm(handrightpt[2:]-handleftpt[2:]) < 0.7: plot([staffleftpt[2]/staffleftpt[4],staffrightpt[2]/staffrightpt[4]],\ [staffleftpt[3]/staffleftpt[4],staffrightpt[3]/staffrightpt[4]],'r-') plot([staffrightpt[2]/staffrightpt[4],staffrightpt[2]/staffrightpt[4]],\ [staffrightpt[3]/staffrightpt[4],(staffrightpt[3]+0.05)/staffrightpt[4]],'r-') plot([staffleftpt[2]/staffleftpt[4],staffleftpt[2]/staffleftpt[4]],\ [staffleftpt[3]/staffleftpt[4],(staffleftpt[3]+.05)/staffleftpt[4]],'r-') plot([staffleftpt[2]/staffleftpt[4],staffrightpt[2]/staffrightpt[4]],\ [(staffleftpt[3]+.05)/staffleftpt[4],(staffrightpt[3]+.05)/staffrightpt[4]],'r-')
When the hands get within a certain distance of each other, the staff (magically) appears, and our little stick figure friend can do some fancy tricks. A screenshot is below, but you’ll want to click here to see the video.
The glue code that gets the SDK working in Sho, kinect.py, is also quite short, but what it does is a little complicated. Since the managed layer of the Kinect SDK depends on the WPF threading model and message pump, we have to create a shim WPF application and window that will get the events. That application has a callback that’s called every time it gets new skeleton data; inside that callback we save the skeletonFrame information and the skeletonUpdateTime in the KinectTracker object, which we can access outside of the shim application’s thread. The entirety of kinect.py is below; note you may need to update the paths to point to wherever you installed the SDK:
# kinect.py
from sho import * import System, clr clr.AddReference("PresentationFramework") clr.AddReference("PresentationCore") clr.AddReference("WindowsBase") clr.AddReference("System.Xaml") addpath("C:\Program Files (x86)\Microsoft Research KinectSDK") ShoLoadAssembly("Microsoft.Research.Kinect.dll") from Microsoft.Research.Kinect.Nui import *
class KinectHelperWindow(System.Windows.Window): def __init__(self): self.nui = None self.Width = 1 self.Height = 1 self.Visibility = System.Windows.Visibility.Hidden self.Loaded += System.Windows.RoutedEventHandler(self.Init)
def SkeletonFrameReady(self, obj, skeletoneventargs): self.kinectinfo.skeletonUpdateTime = System.DateTime.Now self.kinectinfo.skeletonFrame = skeletoneventargs.SkeletonFrame
def Init(self, obj, rea): self.nui = Runtime() self.nui.Initialize(RuntimeOptions.UseDepthAndPlayerIndex |\ RuntimeOptions.UseSkeletalTracking |\ RuntimeOptions.UseColor); self.nui.VideoStream.Open(ImageStreamType.Video, 2,\ ImageResolution.Resolution640x480, ImageType.Color); self.nui.DepthStream.Open(ImageStreamType.Depth, 2, \ ImageResolution.Resolution320x240, \ ImageType.DepthAndPlayerIndex); self.nui.SkeletonFrameReady += \ System.EventHandler[SkeletonFrameReadyEventArgs]\ (self.SkeletonFrameReady)
class KinectTracker: def __init__(self): self.skeletonFrame = None self.skeletonUpdateTime = None pass
def StartKinect(self): self.app = System.Windows.Application() khw = KinectHelperWindow() khw.kinectinfo = self self.helperwindow = khw self.app.Run(khw)
def startKinectTracker(): kt = KinectTracker() kt.shothread = ShoThread(kt.StartKinect, "kinectapp", \ System.Threading.ApartmentState.STA) kt.shothread.Start() return kt
To develop your own Kinect demos, you’ll have to start by getting yourself a Kinect sensor. I’d recommend the standalone sensor, since it comes with a power supply; if you get the package with an xbox 360 you’ll need to buy a separate power supply as it only comes with a cable to connect to the 360 console. Once you have the device, you’ll need to install the Kinect SDK. Once you’ve done that, reboot your machine, plug in your kinect, and you should be ready to go.
In order to use the sensor from Sho, you’ll have to use the 32-bit console, shoconsole32, even on a 64-bit machine, as although the drivers work on both 32-bit and 64-bit machines, the managed SDK only supports calls from 32-bit contexts. You’ll also want to save the glue code above as kinect.py. Once in the console, import it and create a KinectTracker object with startKinectTracker:
>>> import kinect >>> kt = kinect.startKinectTracker()
Once a skeleton is found in the sensor’s range, its data will be copied to kt.skeletonFrame and you can use it as in the demos above.
So there you have it; you now know how to talk to the Kinect SDK from Sho using the wrapper in kinect.py above. I’m sure you’ll do far more creative things than adding a red staff to a stick figure, and we’d love to hear about it. Please send us links to your own demo videos in the comments!
|
http://blogs.msdn.com/b/the_blog_of_sho/archive/2011/06/16/connecting-to-kinect-from-sho.aspx
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
Content
All Articles
Python News
Numerically Python
Python & XML
Community
Database
Distributed
Education
Getting Started
Graphics
Internet
OS
Programming
Scientific
Tools
Tutorials
User Interfaces
ONLamp Subjects
Linux
Apache
MySQL
Perl
PHP
Python
BSD
Getting Started with WSGI
Pages: 1, 2, 3
To extend these simple examples into something a little more realistic, I'll implement an extremely basic blogging application along RESTful lines: using HTTP GET to retrieve a single entry or a list of entries, PUT to add or update an entry, and DELETE to remove one.
The first step is to extend the BaseWSGI class slightly to handle GET requests in one of two ways: GET / should return a list of all entries, while GET [name] should return a named entry. To provide this, I've added code to the __iter__ method so that when the path requested is /, the text ALL gets appended to the method (meaning a subclass now needs to implement both do_GET and do_GETALL):
BaseWSGI
GET /
GET [name]
__iter__
/
ALL
do_GET
do_GETALL
if request_method == 'GET' and self.environ['PATH_INFO'] == '/':
method = method + 'ALL'
At this point, I've decided to store the weblog entries as plain-text files, with nothing in the way of metadata for ordering or filtering. Obviously, in a real application you'd want to be able to search for entries based on particular criteria--perhaps by exposing more meaningful or useful resource URLs (for example, something like /2006/08/my-entry-name)--but for the purposes of this basic application, file-system storage will suffice. Thus, data access for a blog entry is as simple as:
/2006/08/my-entry-name
class Entry:
def __init__(self, path, filename, load=True):
self.filename = os.path.join(path, filename.replace('+', '-')) + '.txt'
self.title = filename.replace('-', ' ')
if load and os.path.exists(self.filename):
self.text = file(self.filename).read()
def save(self):
f = file(self.filename, 'w')
f.write(self.text)
f.close()
Presenting entries needs some kind of templating. Python has an abundance of choices, such as Cheetah, Kid, and Myghty, not to mention numerous others bundled with the various frameworks. To keep things simple, I'm using a homegrown templating engine that simply injects dynamic content based on the IDs in an XML document. (Given the constraint that all IDs must be unique, this is probably the simplest approach to templating XML, at least from a usage perspective.) Thus, the do_GET method of my application becomes:
def do_GET(self):
pathinfo = self.environ['PATH_INFO'][1:]
entry = Entry(blogdir, pathinfo)
if entry.text:
(ext, content_type) = self.get_type()
response_headers = [('Content-type', content_type)]
if self.status_override:
status = self.status_override
else:
status = '200 OK'
self.start(status, response_headers)
tmp = self.engine.load('blog-single.' + ext)
tmp['entry:title'] = entry.title
tmp['entry:text'] = entry.text
tmp['entry:link'] = template3.Element(None,
href='' % (entry.title.replace(' ', '-'), ext))
return str(tmp)
else:
self.start('404 Not Found', [('Content-type', 'text/html')])
return '%s not found' % pathinfo
Using the PATHINFO HTTP variable provided by wsgi, I load an entry, then check to see if the text exists; if not, the blog file was not present, so I return a standard 404 Not Found. If the entry loaded successfully, the get_type() method returns the extension to use for the template (and the content type) based on a type parameter passed in the URL. I create the response headers (just content type, for the moment), and start the response process by calling self.start. At this point I've also checked for the presence of status_override, which is a field used when another method calls do_GET (see the do_PUT method later). Finally, I set the content in the template using the IDs: entry:title, entry:text and entry:link. (I'll return to the do_GETALL method shortly.)
def do_PUT(self):
pathinfo = self.environ['PATH_INFO'][1:]
if pathinfo == '':
self.start('400 Bad Request', [('Content-type', 'text/html')])
return 'Missing path name'
elif not self.environ.has_key('CONTENT_LENGTH') or self.environ['CONTENT_LENGTH'] == '' \
or self.environ['CONTENT_LENGTH'] == '0':
self.start('411 Length Required', [('Content-type', 'text/html')])
return 'Missing content'
entry = Entry(blogdir, pathinfo)
if not entry.text:
self.status_override = '201 Created'
entry.text = self.environ['wsgi.input'].read(int(self.environ['CONTENT_LENGTH']))
entry.save()
return self.do_GET()
For a DELETE, I just do the basics: check to see if the entry exists, delete and return a 204 Deleted:
def do_DELETE(self):
pathinfo = self.environ['PATH_INFO'][1:]
blogfile = os.path.join(blogdir, pathinfo.replace('+', '-')) + '.txt'
if os.path.exists(blogfile):
os.remove(blogfile)
self.start('204 Deleted', [ ])
return 'Deleted %s' % pathinfo
else:
self.start('404 Not Found', [('Content-type', 'text/html')])
return '%s not found' % pathinfo
The do_GETALL method, which is the only one of the subclass methods that doesn't actually correspond to an HTTP verb, is also the only one that differs from the validation+response cycle established by the other methods. do_GETALL will always return 200 OK, and will read in all .txt files in the specified blog directory, reusing the blog-single template (used in the do_GET method). The main differences between this method and do_GET revolve around templating (and are not particularly relevant to WSGI).
blog-single
If I were creating a typical GET/POST web application, testing would be straightforward: use a browser. Because I've used REST semantics, I need to use another tool--in this case, Curl--to test all my application's features. The first step is to start up the blog using python blog.py, and then:
python blog.py
curl -v -X PUT -d @-
-d @-
Ctrl
curl -v
curl -v -X DELETE
I've included three template types: .xhtml for HTML viewing, .xml for simple XML output, and .atom to produce an Atom feed. Test these different templates by calling:
curl -v
curl -v
curl -v
So far I've only demonstrated how to set up a basic, stateless application by extending the foundations provided by WSGI. If you're thinking about larger-scale web application development, the recommended approach is undoubtedly to choose a suitable framework. This is not to say that developing such a webapp is impossible using basic WSGI, but you'll need to add (by hand) a lot of the technology that you get for free with a framework--either by writing your own, or plugging in third-party middleware.
The WSGI perspective on middleware is an important part of the specification. Adding middleware involves wrapping layers of utility code around a base app to provide additional functionality; the PEP calls this a middleware stack. For example, to provide authentication facilities, you might wrap your application with BasicAuthenticationMiddleware; to compress responses, you might wrap it with another middleware component called CompressionMiddleware; and so on.
from paste.session import SessionMiddleware
class myapp2:
def __init__(self, environ, start_response):
self.environ = environ
self.start = start_response
def __iter__(self):
session = self.environ['paste.session.factory']()
if 'count' in session:
count = session['count']
else:
count = 1
session['count'] = count + 1
self.start('200 OK', [('Content-type','text/plain')])
yield 'You have been here %d times!\n' % count
app2 = SessionMiddleware(myapp2)
In this example, SessionMiddleware wraps myapp2. When a request comes in, SessionMiddleware adds the session factory to the environ with the key paste.session.factory, and when invoked in the first line of the __iter__ method, the session is returned as a simple dict. A stack of middleware components added to a basic WSGI application means you can have the benefits provided by many of the frameworks, without necessarily having to constrain yourself to a framework..
|
http://www.onlamp.com/pub/a/python/2006/11/09/getting-started-with-wsgi.html?page=2
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
You can subscribe to this list here.
Showing
8
results of 8
Hmm... Yes it does now appear to be working. I'll let you know if I run
into any problems.
FYI...
I've disabled the touch, get and mail tests.
The get test does work fine but downloads about a meg of information
which is a bit slow and causes my firewall to report that a modified
version of nant.exe is trying to access the internet.
The mail test/task just doesn't appear work on my machine.
I think I'll be making two test suites. One that gets run for each
build. The other that is more "intensive" that is designed to be run as
needed, i.e., before cvs commits & releases.
The touch test/task is failing.
This is a shame since Jay is the only one that has submitted unit tests
:(
If we get a cvs task than nant will be self sufficient to rebuild itself
given a binary distribution and the build file. That would be pretty
cool.
My main priority is to get a stable version ready and be in a position
to make another release when the final .NET SDK becomes available.
Ps, I've also cleaned up how xml errors are reported in the build file.
If you capture the output with a text editor you should now be able to
double click on the error line to jump to the error, i.e., its in the
same format as all the other build errors.
> -----Original Message-----
> From: Ian MacLean [mailto:ianm@...]
> Sent: December 19, 2001 3:16 PM
> To: Gerry Shaw
> Subject: Re: nunit status
>
>
>
>
> Gerry Shaw wrote:
>
> >I've been out of the nant loop for a bit due to final exams and some
> >school but would like to get back into development. The
> biggest issue
> >that I have right now is with Nunit. Now I'm pretty sure its not
> >because of what I've checked in over the last couple of days
> but when I
> >do:
> >nant test
> >
> >It bombs with a caught thrown exception in the test runner. Is this
> >because we have not found a solution to the nunit problem?
> >
> Where exactly is the exception happening ? I found a problem
> where the
> code I checked in had a line commented out that shouldn't
> have ( to do
> with loading the correct nAnt.Core.dll ). Now that thats
> fixed it seems
> to be working for me. I nuked my nant directory - did a clean
> and build. It successfully loads both NAnt.core.dll's and
> runs the tests
> against the correct one. I ensured my build Nant did not have
> the touch
> and get tasks.
>
> If I take the modifed Nant.code and put it in the build directory the
> test fails because it knows that its not the right NAnt. Ie I
> get this
> warning "WRN: Comparing the assembly name resulted in the mismatch:
> Revision Number" which is correct behaviour. the test should
> only work
> agains a version of an assembly it was build against.
>
> The other difference is that I'm in a RC build. Maybe thats
> an issue. If
> you're still seeing the same problem and I can't repro should we make
> the decision to move to the RC version in cvs ? I have the cds if you
> need them,
>
> Ian
>
> >
> >
> >
>
For people not using CVS you can now easily grab the latest cvs contents
(as of 3am of the current day) at:
There is a link from the main web site.
Thanks to Jason @ NDoc for sending me the scripts.
I was planning on writing this - based on your library. I might as well=20
start now
Ian
Mike Kr=FCger wrote:
>Hi
>
>What about a cvs task ? I've a CVS communication library capabale of doi=
ng
>and other stuff (look at NCvs it uses this library to communicate with t=
he
>CVS server). It would
>be platform independend too ...
>
>cya
>
>
>_______________________________________________
>Nant-developers mailing list
>Nant-developers@...
>
>
>
Hi
What about a cvs task ? I've a CVS communication library capabale of doing
and other stuff (look at NCvs it uses this library to communicate with the
CVS server). It would
be platform independend too ...
cya
> Do we want to look at having NAnt scan a directory looking for valid
> task assemblies rather than having to use taskdef every time theres a
> new one ?
I think so but until that's implemented this is the way to have it work
now. The change is pretty easy to make, just call Project.AddTasks()
with the names of assemblies in a specified subfolder. Make sure you
catch exceptions so that when you try to load a standard C .dll instead
of a .NET exception it doesn't blow up (this happened to me once already
:)
I would think that if we looked in a subfolder off of nant.exe called
Tasks and any subfolders inside of that one we should use any tasks that
we find.
Gerry Shaw wrote:
"/>
>
Do we want to look at having NAnt scan a directory looking for valid
task assemblies rather than having to use taskdef every time theres a
new one ?
Ian
There is a new nant.exe and nant.core.dll in cvs that has the zip task
built in. The nant build file now uses this built in task rather than
caling the external zip.exe program."/>
See the UserTask example for more detail.
The reasoning for this is that I want to keep nant totally portable so
that when Mono starts running under linux nant will easily move to that
platform.
I'd still like to include the code in the distribution so I've made a
subfolder under src called Extras where I plan to add these sorts of
tasks. I don't have source safe on my machine so if you could do a get
from cvs and see if this all works I'd appreciate it. If you don't have
cvs access let me know and I'll email you a zip of the project.
My next task is to get nightly .zip's of the cvs tree generated...
Re: underscores, as soon as a large body publishes a coding convention
that isn't too wacked I'll happily adopt it. I find the _ character is
simpler than m_ and helps distinguish between class fields and local
variables better than having to use 'this.' type of syntax.
Gerry
> -----Original Message-----
> From: Jason Reimer [mailto:jpreimer1@...]
> Sent: December 16, 2001 4:04 PM
> To: Gerry Shaw
> Subject: RE: NAnt project
>
>
> Hi,
>
> I got some free time and I updated the source code to
> conform with the projects conventions. If I may be so
> bold, drop the _ before the variable names. Always
> have hated 'em. I did however put them in the code to
> be consistent.
>
> I also regenerated a new interop dll with the name
> SourceSafe.Interop.dll.
>
> If you have any questions, please let me know.
>
> Jason
>
>
>
> --- Gerry Shaw <gerry_shaw@...> wrote:
> > Great, I don't have SourceSafe so I can't test it
> > but I was wondering if
> > you could make the following small changes:
> >
> > 1. Rename the interop dll to something a bit
> > smaller, say
> > Interop.SourceSafe.dll, whatever makes sense but
> > something short and
> > clear.
> >
> > 2. Remove the VSS prefix from everything and place
> > all the class in a
> > new namespace called
> > SourceForge.NAnt.Tasks.SourceSafe. In general
> > avoid abbreviations and use SourceSafe instead of
> > VSS. So
> > SourceForge.NAnt.VSSBase would become
> SourceForge.NAnt.SourceSafe.Base
> > (or BaseTask).
> >
> > If you don't think you'll get that done within say a
> > week let me know
> > and I'll post the changes as is and make them myself
> > in the future.
> >
> > This is a great contribution.
> > Thanks!
> >
> >
> > > -----Original Message-----
> > > From: Jason Reimer [mailto:jpreimer1@...]
> > > Sent: December 10, 2001 3:59 PM
> > > To: Gerry Shaw
> > > Subject: Re: NAnt project
> > >
> > >
> > > Hi again,
> > >
> > > I've finished a set of 4 tasks for use with Visual
> > > Source Safe. These are vssget, vsslabel,
> > vsscheckin,
> > > vsscheckout. I attached a zip file that contains
> > these 4
> > > plus a base abstract class and a COM interop dll
> > to the
> > > Source Safe interface. They are not a direct port
> > of the
> > > Java tasks, either from a interface or
> > implementation
> > > perspectives, but they contain most if not more
> > than the
> > > functionality the Ant tasks. I have documented
> > the tasks and
> > > attributes fairly well using the doc comments. I
> > may still
> > > add additional attributes in the future, and if I
> > do so I
> > > will send you those updates. I've done a fair
> > amount of
> > > testing on these also, and I believe they are
> > stable.
> > > Honestly, you and the others did a great job with
> > the
> > > foundation, and it was very simple to build these
> > tasks, so
> > > consequently are not that complex to debug.
> > > I was thinking of going back and adding a
> > > StringValidatorAttribute class (for string length
> > > validation) for some of this stuff, but that's
> > just a
> > > bell and whistle.
> > >
> > > I hadn't looked at your coding standards
> > completely
> > > before I started coding, so some of the variable
> > > naming conventions vary from the existing tasks.
> > If I
> > > get some time, I will try and go back and change
> > this
> > > to be more uniform.
> > >
> > > If you have any questions, please let me know.
> > >
> > > Thanks,
> > >
> > > Jason
> > >
> > >
> > > --- Gerry Shaw <gerry_shaw@...> wrote:
> > > > I believe somebody is working on some cvs tasks
> > but
> > > > AFIK nobody has done the VSS tasks. If you
> > could
> > > > write a working task for VSS I'd love to include
> > it
> > > > in
> > > > the distribution. I'm sure many others would
> > find
> > > > it
> > > > quite useful.
> > > >
> > > > --- Jason Reimer <jpreimer1@...> wrote:
> > > > > Hi,
> > > > >
> > > > > I got your email from the source forge web
> > site,
> > > > > after
> > > > > looking at NAnt. I have noticed (or at least
> > I
> > > > > can't
> > > > > find any) that you do not have any tasks for
> > > > source
> > > > > control systems yet. I am most interested in
> > > > doing
> > > > > automated builds, while integrating with
> > Visual
> > > > > Source
> > > > > Safe. I was going to port the Ant VSS tasks
> > to C#
> > > > > for
> > > > > this purpose, and was wondering if you would
> > like
> > > > me
> > > > > to send you the code, or contribute to the
> > project
> > > > > in
> > > > > some way.
> > > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jason Reimer
> > > > >
> > > > >
> > __________________________________________________
> > > > > Do You Yahoo!?
> > > > > Find the one for you at Yahoo! Personals
> > >
> > > >
> > > >
> > > >
> > > __________________________________________________
> > > > Do You Yahoo!?
> > > > Yahoo! GeoCities - quick and easy web site
> > hosting,
> > > > just $8.95/month.
> > > >
> > >
> > >
> > > =====
> > > Jason P. Reimer
> > > jpreimer1@...
> > >
> > > __________________________________________________
> > > Do You Yahoo!?
> > > Send your FREE holiday greetings online!
> >
> > >
> >
>
>
> =====
> Jason P. Reimer
> jpreimer1@...
>
> __________________________________________________
> Do You Yahoo!?
> Check out Yahoo! Shopping and Yahoo! Auctions for all of
> your unique holiday gifts! Buy at
> or bid at
>
|
http://sourceforge.net/p/nant/mailman/nant-developers/?viewmonth=200112&viewday=19
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
Agenda
See also: IRC log
<ArtB> Scribe: Josh_Soref
<ArtB> ScribeNick: timeless
ArtB: When you do introductions, please indicate if you are not a WG member
plh: as the charter has been reupped, most people are not WG members
chaals: if you were a WG member and haven't reupped, please nag your AC rep
ArtB: Josh_Soref is a fantastic
scribe, he works for RIM
... RIM is not a member
<plh> WG participation status
ArtB: when you speak for the first time, please introduce yourself
chaals: when Josh_Soref says
stop, you have to stop, because you'll be lost otherwise
... I'm chaals, Opera, I'm a chair
ArtB: I'm ArtB, from Nokia, I'm a chair
Paul_Kinlan: I'm PaulKinlan from Google, registered as observer, now member
ericu: I'm ericu from Google, I'm a member
glenn: glenn, Cox, member
DanD: Dan Druta, AT&T, member
Arnaud: Arnaud Braud, France Telecom, member
bryan: Bryan Sullivan, AT&T, member
Russell_Berkoff: Russell Berkoff, Samsung, Observer
aklein: Adam Klein, Google, Observer
rafaelw: Rafael Weinstein, Google, Observer
tross: Tony Ross, Microsoft, Member
rniwa: Ryosuke Niwa, Google, Member
MikeSmith: Mike Smith, W3C Team,
Member
... the first one to rejoin
PaulC: Paul Cotton, Microsoft, Chair of HTML WG, your host
anne: Anne, Opera, Member
ordinho: Odin Horthe Omdal, Opera, Member
Travis: Travis Leithead, Microsoft, Member
shan: Soonbo Han, LG Electronics, just joined [and was dropped by recharter]
ArtB: there's an Action for someone to bug ACs for rejoins
<chaals> ACTION: chaals to bug AC reps of ex-members to re-join after new charter [recorded in]
<trackbot> Created ACTION-643 - Bug AC reps of ex-members to re-join after new charter [on Charles McCathieNevile - due 2012-05-08].
magnus: Magnus Olsson, Ericsson, Member (need to rejoin)
krisk: Kris K, Microsoft, Member
plh: Philipe Le Hegaret, W3C Team, Member
scheib: Vincent Scheib, Google, Member
dglazkov: Dimitri Glazkov, Google, Member
<MikeSmith> trackbot, start meeting
<trackbot> Meeting: Web Applications Working Group Teleconference
<trackbot> Date: 01 May 2012
<magnus> present Magnus_Olsson (magnus)
ArtB: We always preallocate an
item or two
... and then figure out the rest as we meet
... we have a couple of topics
... we had penciled Intents for 1-2pm
... James_Hawkins was going to manage that
PaulC: The preallocated name
badges were to help the secretary
... just register at the desk
... if you have problems, let me know
ArtB: Here is the list of
potential topics
... most of them I added
... (in alphabetical order)
... and then dglazkov added components
... and bryan added server sent events
<anne>
<anne> ^^ meeting agenda
<MikeSmith> agenda:
ArtB: WebAppsSec has CORS on its
agenda for tomorrow morning
... they had allocated half an hour for LC CORS
<MikeSmith> Jonas Sicking has entered the fray
ArtB: 9:45-10:15
chaals: How many people are interested in CORS?
[ Quite a few hands rise ]
chaals: does anyone object to bringing them in here?
[ No objections ]
ACTION ArtB to talk to WebAppsSec about a joint slot
<trackbot> Created ACTION-644 - Talk to WebAppsSec about a joint slot for CORS LC [on Arthur Barstow - due 2012-05-08].
chaals: Welcome sicking
sicking: Jonas_Sicking, I'm the late jonas sicking, Mozilla. Not The Late Jonas Sicking, just late
chaals: Any other topics not in the wiki?
scheib: I spoke briefly with
ArtB
... I'm the editor of the Pointer Lock specification
... i'm new to editing
... it's just been added to the charter
ArtB: I think it would be useful
for new specs that have been added
... that people are starting to implement
chaals: I might put looking at
the Charter/Schedule/New Specs
... either at the beginning. or at the end
... any preference?
anne: it might be good to put them at the beginning
bryan: I have a conflict for 11-1
chaals: we won't put push
there
... going around the room
<MikeSmith> list of specs is at
chaals: we've got less gaps here (today), than there (tomorrow)
dglazkov: Shadow DOM, HTML Templates
chaals: I'll put bryan (Push/SMS)
to 2:30-3 (today)
... and Web Components for 11:15-12:30 (today)
dglazkov: my items is procedural ... gauging temperature
chaals: IndexedDB
ericu: we have a request from
someone from Google who can't be here today
... can it be tomorrow?
chaals: Yes we can
ArtB: i'd like a slot for Hixie
's 4 CRs
... where we are, can we get someone to fill in the gaps
... how do we manage future work
... v. getting to rec
chaals: "Hixie's hand-me-downs" 11:30-12:30 (tomorrow)
Travis: 10 minutes for DOM3 events/DOM4 from that slot?
chaals: is that going to be short?
anne: we had the longer one last time
Travis: it should be short
anne: we need 15 minutes for Full
Screen
... ArtB mentioned that
chaals: the Late Douglass
Schepers
... people who have not introduced yourselves
... please introduced yourselves
shepazu: Doug Schepers, W3C Team Contact, Member
tantek: Tantek Celik, Mozilla, Observer
hober: Ted O'Connor, Apple, Member
anne: Is the Stream API that's an
extension to XHR going anywhere?
... is the editor here?
gbillock: Greg Billock, Google, Observer
MikeSmith: please put the IME API
chaals: i'll try to leave space
for breaks
... how many people have read the new charter?
[ ~5 hands ]
chaals: 4 of us were lying
shepazu: I don't think chaals read it, and he wrote it
bryan: the link on the webapps page is to the old charter
<anne> charter:
Josh_Soref: the main webapps page is unusable
bryan: the w3c pages don't work well on iPads
shepazu: Action bryan to buy me an iPad
chaals: Action bryan to buy everyone an iPad
bryan: the style sheet is generally bad
chaals: who's driving the screen?
ArtB: I am
[ ArtB projects PubStatus ]
<MikeSmith> new WebApps charter
ArtB: we have over 700 people
subscribed to our list
... of those, only 30-40 people are really active
... i like to keep pub status accurate
glenn: I have a comment to shepazu
<plh> Agenda for today
glenn: it might be helpful to say XHR subsumes ....
shepazu: can we make emmandations?
plh: nope
chaals: CORS
... we'll look at this tomorrow
anne: I don't see how it's a plan
[ the label for CORS says "LC Period ends 1-May-2012" ]
anne: but the statement is
accurate
... there have been no comments raised
... there was one "we should design this differently"
... there was a comment about making it more performant on mobiles
... that was related to caching
chaals: do you expect a second version
anne: if we tinker with caching,
then we'd need a second version
... there's also From-Origin (the opposite of CORS)
... there are long term plans re: merging CORS + fetching
shepazu: you should talk about that in the slot
chaals: If you're speaking, you need to speak loud and to the center of the room
anne: we can't fix all the bugs
ArtB: so move to CR?
anne: there have been no LC comments
chaals: we expect to move to CR
in two weeks?
... Clipboard APIs and Events
... halvord stein is not here
... is anyone following that closely enough?
[ no ]
ArtB: do we know implementation status?
<tantek> One request for the new WebApps charter (starting July 1 2012 presumably) - please switch to using the W3C wiki (instead of webapps wg-only wiki) :
anne: there's
implementations
... but they have differences
rniwa: depending on platforms,
there are variations
... there are issues involving determining Same-Origin
... affecting what can/should be stripped
... it might be needed
chaals: so that's work in progress
anne: there's an attribute for secure usage?
chaals: CORS - testing
... and test facilitator, and test suite?
odinho: me
chaals: what's the status of the test suite?
odinho: for the test suite
... i've been reading through the tests that are there
... i've incorporated the things that are missing into Opera's Test Suite
... but i haven't gotten entirely through the WebKit tests
chaals: and that hasn't been sent back to the group
krisk: tests that are submitted
are a wide range
... we should go through them
sicking: we have a couple of
tests that are pretty big
... but they won't run anywhere else (they use "yield")
... would you like us to submit those
... a lot of the tests are expressed as data
... you could write a new wrapper around it
odinho: i've looked at it
chaals: I'm trying to get a
bird's eye view
... summary: odinho is looking at it others are working on it
... is there a test coordinator for Clipboard APIs
rniwa: I don't think so
... how would we test it?
<plh> I think it would be good for mozilla to submit what they have, and we figure out in the longer run how to modify them
rniwa: it can't be from the web
page
... so it has to be manual
shepazu: so you define manual tests
anne: there's a WATIR framework
chaals: don't sign up to do
something if you don't have the bandwidth for it
... DOM4
anne: the Plan statement (for
DOM4) isn't quite correct
... at some point we'll add new features
... better event registration
... extending ClassList
... varadic? arguments
chaals: do we push DOM4 through
and start DOM5
... what's the rush to get DOM4 finished
anne: you could push DOM4 through and work on DOM5
<Zakim> MikeSmith, you wanted to ask if somebody wants to give update about plans for Quota API spec
anne: but we don't have a way to manage forks (maintaining DOM4 and working on DOM5)
plh: we can't link to an unstable thing from a spec
chaals: that discussion is about
w3c process
... that's out of scope for this WG
anne: i know there are people
that want it
... but i have limited bandwidth
... we could publish dom4 now
... it's way better than dom3
chaals: Adrian Bateman, Microsoft, Member
Travis: only the WebPerf WG has requested to link to DOM4
plh: a bunch of specs want to
rniwa: there's demand to
deprecate DOM Mutation events (DOM3)
... i think mozilla is planning to unprefix the replacement
chaals: it sounds like it would
be good for the chairs to find someone with the bandwidth to
branch of DOM4 and stabalize it
... is that someone standing up to volunteer?
... thank you very much Tantek
ACTION ArtB to find someone to branch DOM4 and publish
<trackbot> Created ACTION-645 - Find someone to branch DOM4 and publish [on Arthur Barstow - due 2012-05-08].
anne: if you make the CR reqs loose, we can do it fairly quickly
ArtB: is anyone interested in helping with that task?
[ Silence ]
chaals: don't worry anne, we'll
come back and ask you again
... until you come up with the right answer, which is yes
Travis: ArtB, please show PubStatus wiki page
[ ArtB captures need to fork DOM4 for stable+publishing ]
<anne> WebApps Pub Status (on screen)
Travis: bugzilla database is the
prime spot for tracking (DOM3 Events)
... i think we should issue another LCWD
chaals: DOM Parsing + Serialization
anne: the HTML WG might or might not work on it
chaals: it's in our charter
PaulC: the CfC for HTMLWG:ISSUE-198 closes today
anne: in particular, if it
closes, it will be forked from the html
... and someone from microsoft will publish it
chaals: despite the fact that
it's in our charter, we don't know if it will happen in our
group
... is that right paulc?
... my sense was that we would do it in our group
anne: no, they wanted it in the html wg
PaulC: i'd have to do the
research
... i don't think HTMLWG:issue-198 speaks to where it would be done
ACTION chaals to talk to paulc about where Parsing+Serialization work is done
<trackbot> Created ACTION-646 - Talk to paulc about where Parsing+Serialization work is done [on Charles McCathieNevile - due 2012-05-08].
chaals: Element Traversal is
DONE
... File API
sicking: the pub status for File
API looks right
... we can possibly do it in Q2
chaals: do we expect Q3
... let's say we expect it in Q3
... directories and systems
ericu: that's all correct
chaals: From-Origin Header
anne: I don't think there's been
much uptake
... drop it, i guess
... i've addressed all the comments
... there haven't been other comments
... I don't think anyone implemented it
... the idea was to prevent people from using CORS in places for which it wasn't quite intended
... but they started doing that anyway
chaals: so that has no one to
take it forward
... does anyone want it?
... it's up for grabs
anne: i'm happy to continue
editing it
... but if no one is going to implement it, then there's not much point
chaals: let's start a CfC to
publish it as a note
... if that doesn't shake anyone out, then park it as a note
<ArtB> ACTION: Art start a CfC to stop work on From-Origin spec [recorded in]
<trackbot> Created ACTION-647 - Start a CfC to stop work on From-Origin spec [on Arthur Barstow - due 2012-05-08].
ACTION ArtB to start CfC to publish From-Origin as a note
<trackbot> Created ACTION-648 - Start CfC to publish From-Origin as a note [on Arthur Barstow - due 2012-05-08].
bryan: I understand technically
what it was intended to do
... and i understand it was a good idea
... but i'd like to understand how CORS stands if we don't have From-Origin
chaals: Full Screen
... do we have a test coordinator?
anne: no
chaals: ok, so we need one
WonSuk: WonSuk Lee, Samsung, Member
plh: From-Origin is in the
WebAppsSec Charter
... so we should talk to them
ArtB: i didn't think it was a joint item
plh: we can talk to them tomorrow
adrianba: Fullscreen...
... is it two specs?
... there's a CSS bit
tantek: it will be managed together
ArtB: how close is it to somewhere?
<ArtB> ACTION: Art start a CfC to publish a FPWD of Fullscreen spec; coordinate with CSS WG [recorded in]
<trackbot> Created ACTION-649 - Start a CfC to publish a FPWD of Fullscreen spec; coordinate with CSS WG [on Arthur Barstow - due 2012-05-08].
chaals: we expect a FPWD this
Q
... Gamepad
scheib: The Gamepad editor is
Scott Graham, from Google
... the draft has been stable for the last little while
... chrome is behind a flag
... I believe firefox is soon to ship without a flag
... i don't see anything blocking
plh: publish as LC?
shepazu: FPLC?
... it's kind of funny
chaals: you can do that
... full screen might do the same
<ArtB> ACTION: Art start CfC for FPWD + LCWD of Gamepad spec [recorded in]
<trackbot> Created ACTION-650 - Start CfC for FPWD + LCWD of Gamepad spec [on Arthur Barstow - due 2012-05-08].
shepazu: why don't we have a session to do them
chaals: Indexed DB
... we have a test suite
... it's on the agenda
... anything to say?
<anne> fwiw, Gamepad is not ready for LC
<anne> at least is not
sicking: i don't think there's much to do
chaals: IME?
... MikeSmith ?
<anne> e.g. GamepadEvent does not inherit from Event at the moment and does not define a constructor
MikeSmith: do you need a Test Facilitator?
chaals: yes, thanks
MikeSmith: i'm happy to do it
chaals: we need a FPWD
<ArtB> ACTION: Art start CfC to publish FPWD of IME spec [recorded in]
<trackbot> Created ACTION-651 - Start CfC to publish FPWD of IME spec [on Arthur Barstow - due 2012-05-08].
chaals: anyone following Java bindings for WebIDL?
Travis: i don't know anyone doing it
chaals: i used to
... pointer lock?
scheib: I'm the editor
chaals: do you have a test faciliator?
scheib: i don't know
chaals: it's someone who commits to getting tests
<ArtB> ACTION: Art start CfC for Pointer spec [recorded in]
<trackbot> Created ACTION-652 - Start CfC for Pointer spec [on Arthur Barstow - due 2012-05-08].
chaals: you can do it yourself
scheib: i'll probably do it
myself
... i'm not sure of the timeline
chaals: Progress Events
... waiting on implementations
anne: there's a test suite
... Ms2ger wrote tests that end up testing WebIDL
... which people get wrong
... the test suite doesn't test dispatch
... just the interface
chaals: status?
anne: when is Opera going to pass the test suite?
chaals: Quota
MikeSmith: I thought Kinuko Yasuda was working on it
chaals: and that doesn't have a
test facilitator
... looks like we need a lot of test facilitators
ArtB: yeah, a lot of holes
chaals: selectors
... it's waiting on me
... it's waiting on WebIDL
... as WebIDL is going to CR
... I think Selectors can go to PR
... the test facilitator should be me
plh: we should have a link to the interop report
chaals: expect an advancement to
PR to Q2
... then it blocks again until WebIDL moves forward
... Server Sent Events
ArtB: we published a LC last
week
... we have 3 weeks
... i think there was a comment last week
glenn: there was a comment about infinite reconnects
chaals: we have comments
... i think everyone's had the same issue
ArtB: the only tests i know of
are Opera's
... can you submit them?
odinho: ye-> Imagemator, we can
chaals: Shadow DOM
dglazkov: been working on
spec
... we have a test suite
... dominic has been doing them
... the spec is fairly stable
... i was going to ask about moving it to WD
chaals: the procedure for moving
to FPWD
... or LC
... is: as an editor, you write to the chairs and say "i think we're ready"
<ArtB> ACTION: Art start a CfC to publish a FPWD of Shadow DOM [recorded in]
<trackbot> Created ACTION-653 - Start a CfC to publish a FPWD of Shadow DOM) [on Arthur Barstow - due 2012-05-08].
chaals: we write to the group asking for CfC
ArtB: the thing about FPWD is
that it starts a call for IP exclusions
... it's good for the feature set to be defined at a high level
... so the ip guys can look at that
dglazkov: we're well past it
chaals: in that case, we should
[already] have a FPWD
... and we'll do that with you
dglazkov: it's well pas that point
chaals: URL
MikeSmith: looking at anne
anne: I'm working on
encodings
... adam was editing, then mike
ArtB: there's a warning from adam
MikeSmith: we need to look
through the tests
... next month we can look at it
... we could publish a FPWD now
... I can put it together
<ArtB> ACTION: Art start a CfC for FPWD of URL spec (Mike to not be lead Editor but will help to drive it) [recorded in]
<trackbot> Created ACTION-654 - Start a CfC for FPWD of URL spec (Mike to not be lead Editor but will help to drive it) [on Arthur Barstow - due 2012-05-08].
chaals: FPWD needs to lay out
what the thing does, which we're at
... Screen Orientation
... aka Screen Lock
... view orientation
ArtB: Mounir is working on it
[ plh, the Frenchman, properly pronounces his name, and asks how there could be a problem pronouncing it ]
sicking: i don't know
ACTION ArtB to follow up with mounir about status of Screen Orientation
<trackbot> Created ACTION-655 - Follow up with mounir about status of Screen Orientation [on Arthur Barstow - due 2012-05-08].
chaals: WebIDL
... Travis, look good?
Travis: i am the test facilitator, but i haven't facilitated
chaals: Web Intents
gbillock: we probably need a test
facilitator
... i'll sign up for that
Travis: we need a FPWD
gbillock: we'll talk about that this afternoon
chaals: Web Messaging
ArtB: in CR
chaals: as of this morning
... that's PostMessage
ArtB: according to caniuse.com, it has the most deployment
chaals: but no tests
shepazu: i don't think this is the right room to draw them from
<ArtB> ACTION: barstow find a Test Facilitator for Web Messaging CR [recorded in]
<trackbot> Created ACTION-656 - Find a Test Facilitator for Web Messaging CR [on Arthur Barstow - due 2012-05-08].
chaals: Web Sockets
... we need to finish CR/Test suite
krisk: MikeSmith helped get a
server up
... i think MikeSmith 's going to update one module
... but it seems to be going along pretty well
... tests are pretty complete
ArtB: so MikeSmith will update the module
chaals: run the Test Suite, ask for PR
ArtB: are you aware of implementations that pass everything?
krisk: we're pretty close
anne: there's a problem in Web
Sockets relating to Isolated Surrogates
... the spec requires throwing
... but preference is to replace
... i don't think it's tested by the test suite
glenn: there was discusston on isolated surrogates in public-script-coord
anne: it's related, but
[currently] it's not the same
... and it won't change
... spec requires throwing
... most want not throwing
adrianba: i thought we threw
anne: for consistency with XHR which doesn't throw
Josh_Soref: and web authors won't expect it to throw
krisk: i think we should talk about this in our Hixie specs slot
chaals: Web Storage
ArtB: i think there's a late DOM4
change
... which blocks Web Storage
... does anyone implement that?
krisk: I don't think anyone does
yet
... it's definitely blocked on that
ArtB: Yikes,
krisk: we should talk about that in the Hixie specs slot
chaals: Web Workers
ArtB: CR today
Travis: someone doing that should work on Web Messaging, since they're intertwined
anne: Web Workers has feedback that may require going back to LC
chaals: that's right
... into that slot too
... XBL2
... anyone love that enough to follow up?
ArtB: wait for sicking
chaals: my impression is that it's going to be parked
anne: I think smaug is the only person who cares
chaals: XHR
anne: 2 does'nt existr
s/does'nt existr/doesn't exist/
anne: I wrote a test suite
once
... but no one cared
... i tried to find someone, and odinho ...
odinho: i had an intern
chaals: making an intern isn't a
good idea
... since they disappear
... we got a request from Mozilla when we rechartered
... to look at web app packaging
... sort of a JSON version of Widget Packaging
... and we have a potential draft starting point
... do you, tantek, have any further idea on its status?
tantek: is this Manifests?
chaals: yes
shepazu: yes
... and do you know who that is?
tantek: i think that was Michael
Hanson
... what's the input you are requesting
chaals: it's in our charter
... mozilla has a spec and someone supposedly into it
... do they have someone to do the work
... and you can say i don't know
tantek: i don't know
chaals: the answer is "we don't know"
ACTION shepazu to contact dbaron (Mozilla AC), cc tantek
<trackbot> Created ACTION-657 - Contact dbaron (Mozilla AC), cc tantek [on Doug Schepers - due 2012-05-08].
[ Break for 15 minutes ]
chaals: sicking wasn't here
... XBL2 should be parked as a WG Note
sicking: if things go south, can we bring it back?
chaals: yes, it's in the charter
plh: is there a lot of work?
shepazu: do we do a CfC?
chaals: I volunteer to update the status of the document
<ArtB> ACTION: Barstow start CfC to create a WG Note for XBL2 (and Chaals will do the work) [recorded in]
<trackbot> Created ACTION-658 - Start CfC to create a WG Note for XBL2 (and Chaals will do the work) [on Arthur Barstow - due 2012-05-08].
chaals: where are we?
dglazkov: lots of work has been
done since last TPAC
... the main feedback at TPAC
... was we brought a lot of stuff
... but it was a bag of goods
... rather than a coherent whole
... we needed a declarative form
... where is the spec
... confinement/isolation
... lightweight/functional
... with the help of shepazu, we got things we needed
... it takes more work to get a component in webkit bugzilla
<shepazu> s/help of shepazu/help of ArtB and shepazu/
<shepazu> Web Components Explained
<dglazkov>
s|-> Shadow DOM ED|
dglazkov: we talked to a lot of
people
... i tried to come up with as solid of a spec as i could
... simultaneously we developed this in WebKit
... behind a flag, and only available in Developer builds
... i don't want a repeat of WebSQL
... this helped inform ourselves about things
... it helped flush out things
... the basis of the spec was the XBL2 part
... there has been a lot of things added
... a lot of that is precision of shadow dom
... htmly things
... guided by our implementation
... today the spec is in pretty good shape
... we have a small bug list
<dglazkov>
s|-> Shadow DOM Bug Tree|
dglazkov: some are small things,
"not MUSTy enough"
... there's one (largish) addition we're contemplating
... bug 15818
s|s/does'nt existr/doesn't exist/||
s|2 does'nt existr|2 doesn't exist|
[ XXX scribe suspects that the scribe script has reached its breaking point ]
dglazkov: i also worked on the
HTML Templates Spec
... an idea
... we have the templates element (see the Explainer doc)
... what makes it "interesting" is that it requires HTML Parser modifications
... I wrote the spec and WebKit modifications
... to see how it was received
... several people voiced Cautious Concern
... Hsivonen and Abarth
... the two parser people whose brain's we picked
... they James Graham from Opera wasn't very happy either
... there's still a need for an extra mode (?)
... the <template> tag has 2 modes
... "declare anything"
... "declare anywhere"
... we're going to drop "declare anywhere", we don't need it for Web Components
... "declare anything" we're going to keep, since it seems useful
Josh_Soref: you're going to drop anywhere, and you're keeping anything?
[ Laughter ]
dglazkov: right
... Custom Elements is the next spec in line
... i'm planning to start working on it next week
... i spent the last couple of weeks researching the problem space
... i wrote a poly-fill
... if you have Shadow DOM
<dglazkov>
s|-> Polyfill (using Web Components)|
dglazkov: in Custom Elements, one
of the new thing is fictional syntax
... these items aren't controversial
... which is a big issue
... don't want Synchronous
... but Asynchronous has issues: When am I a component/When am I unknown?
... instantiating a Component
... has interesting effects
... maybe i'd like to be able to drop into user script
... but if i'm instantiating from Parser, that maybe isn't a good idea
... more mundane issue
... custom elements are DOM Objects
... with an extended prototype chain
... i don't know how to spec this
... since it creates a dependency on ECMAScript
anne: what exactly?
dglazkov: Custom Elements extend the Prototype Chain
<plh> partial interface?
dglazkov: I don't want to create a dependency on ECMAScript
anne: why is creating a dependency on ECMAScript a problem?
plh: it seems like you're creating a partial interface
dglazkov: right, but it's arbitrary
anne: you should talk to
cameron
... he'll probably say that you have to define it yourself in prose
s/cameron/heycam/
scribe: i'll look into it next
week, after this session
... another thing, relating to elements
... we came to TPAC with custom tags
... there was much grievance
s/custom tags/custom tags: x-slider/
scribe: we switched to use
elements
... it's a magical element, you can only set it once
... button is fancy button
... during instantiation, you have to specify it
... eric myer, of myerweb
<dglazkov>
scribe: i want to bring it up,
i'm feeling very ambivalent
... i'd like to figure out who would be the right person or forum
... i posted to webapps
... and crickets chimed in
tantek: isn't this more HTML WG than WebApps WG?
dglazkov: that's what i'd like to know
shepazu: about inheriting from
Button / Slider / Calendar
... there's been talk in the past about having psuedo elements
... say for CSS
... say for the slider's thumb
dglazkov: we looked at the css
variables spec
... the spec says css variables inherit into shadow dom
shepazu: that calls out the need for pseudo elements
dglazkov: with css variables, you don't
shepazu: i don't understand yet
<dglazkov>
dglazkov: i know the MS guys did
pseudo elements
... and we have them in WebKit
... and we hate them
... what browsers use pseudo elements to style bits of things
... i think you use pseudo classes
sicking: what do we use to style
the placeholder
... or input elements?
tantek: it's a psuedo class
dglazkov: in any declarative
paradigm
... if you're saying button, or div is shelf
... you're defining a subclass
... and when you instantiate it
... you don't say, it's a shelf, oh, it's also a div
shepazu: could there be another thing other than localname?
dglazkov: i don't want to mess with
tantek: could you consider defining it as a mixin rather than a subclass?
dglazkov: that's decorators
tantek: why not have everything be a mixin?
dglazkov: when you're dealing
with everything as an api
... you want to ensure things are always the same
... you don't want a style recalculation to cause your object to lose its decorator/api?
tantek: that's done through css?
dglazkov: well, decorators are
done through css
... and then there's the moving it out of the tree
tantek: well like the class= attribute
dglazkov: but then you can have
"spooky action at a distance"
... if you change the class name
... what happens to its state?
tantek: that invariant could be maintained
dglazkov: i think that's
possible
... when the developer of a component
... relies on it to be a button
... if you want to have multiple things as a tree
... that's definitely possible
sicking: it introduces
complexity
... roughly Multiple Inheritance in C++
... it's very powerful, but very complicated
dglazkov: it's just extending a
prototype chain
... moving down the chain
tantek: it seems like reinventing Java Class Hierarchies
dglazkov: it's not
reinventing
... just naturalizing JS inheritance into the DOM
s/Present+ glenn//
shepazu: in SVG we have the USE
ELEMENT
... you have a use element, you reference an elemnt
s/elemnt/element/
scribe: and you get another
instance
... but you can add attributes to the copy
... so you can have a plain star
... and then style one copy to green
... or red
... at TPAC, you said "No"
... is that still the answer?
dglazkov: i think having the
shadow tree with separate style
... has dragons
shepazu: so we could reuse it?
dglazkov: you will lose some of
the invariants that the SVG spec provides
... but the way Shadow DOM is defined
s/Present+ PaulKinlan//
shepazu: i think all that SVG
needs to keep
... is the way to style each instance separately
dglazkov: that's possible
today
... SVG uses Shadow DOM in a very limited way
... it doesn't have insertion points
... if you want to extend to that
... it's OMG
ArtB: from a procedural
perspective
... we agreed to a CfC for Shadow DOM
... what about Template Spec?
dglazkov: I think Template Spec,
as it is right now, we're going to kill it
... and we'll pursue it in HTML
... in "Custom Elements"
rafaelw: I think that's fine
ArtB: so we're not going to publish this
dglazkov: we're probably not going to publish it
rafaelw: one of the more salient
issues of the template element spec
... is two things
... what mechanism creates inertness
... and where do those elements reside?
... on the ML, there was a propose that they be lifted out
... there was an item about it being objectionable
... if we can sort out that
... i think that's the most useful+controversial part
... if we can get consensus on that, i think we can get progress
[ Time Check ]
anne: hsivonen, abarth, jgraham
are not here
... hober is here
... basically you lock out XHTML uses of templates
hober: all things being equal, we shouldn't introduce more divergence between HTML and XHTML
dglazkov: we have a Mexican
Standoff
... between should we hurt XHTML
... vs. should we introduce something very non performant
<MikeSmith> cough TAG cough
ojan: Ojan, Chrome, Member
<hober> s/Chrome/Google/
<shepazu> Alex Komoroske
komoroske: Alex Komoroske, Google, Observer
anne: elements inserted based on
the template element
... you don't want them to be in
... because they cost resources
... and are exposed by QuerySelectAll/etc
dglazkov: can we modify the XHTMLParser?
anne: I have a draft that tries
to modify XHTML parsing
... but it hasn't ...
dglazkov: can we CfC dropping XHTML?
[ laughter ]
s/XHTML/XML/
chaals: there's a proposal to make XML a kinder gentler beast
anne: there's a big leap for
moving things into a detached dom tree
... it's cool and works for me
<MikeSmith> need a magic namespace
anne: but i don't think it would fly for others
tross_: technically, it's
inserted into the tree
... and then removed before anyone looks
anne: the people who will care is
the TAG
... and they want a document served as HTML or XHTML to behave the same
PaulC: More specifically, the Director cares
chaals: changing XML is like
changing the W3C Patent Policy
... but it isn't written in Stone
... it's on a wiki somewhere
anne: we can say "we want to do
this in html"
... we don't think it will work in xml
PaulC: make a comment on the html wg's document
anne: that's done, there's a
bug
... but that won't get TAG attention until it ships
ojan: no one has an alternative
proposal that's technically feasible
... every other proposal has serious technical problems
anne: if you don't address hiding from DOM Query
ojan: and every future api that
might do a network request or live action request
... needs to be template aware
anne: in effect everyone needs to be aware
shepazu: can we introduce the
feature and say "does not work with xml"
... and let them solve it?
anne: we already have that, it's called <noscript>
dglazkov: we could also say we require an esoteric changes
sicking: if i were to do this in
Gecko
... i wouldn't touch Expat
... i'd change how the tree constructor handles events from Expat
... i don't think we need to violate XML
... wasn't there a proposal to stick things into <script> tags?
dglazkov: there was
sicking: although that also doesn't work in XML
chaals: we can say "hey world,
we're going to upset your apple cart/orchard"
... and see if they care
MikeSmith: +1
anne: you definitely violate the spirit
chaals: there's no question that it makes a mess
tantek: if you're using XML, can't you use XSLT?
dglazkov: resolution: we'll try to spec it as "doesn't work in XML"
tantek: I don't think it's Apple Specific
[ Laughter ]
ArtB: dglazkov, have you thought about publishing the Explainer?
dglazkov: i thought about
it
... but it seems like a sequencing issue
chaals: it makes sense to do it
dglazkov: i can reformat it
... update it (for Shadow DOM
... and then publish
shepazu: i can help
dglazkov: if you guys have
time
... please dig into Shadow DOM and help me eliminate non MUSTy stuff
anne: there might be a lot of those things
<ArtB> ACTION: barstow start a CfC to publish a FPWD of Web Components Explainer (when an ED with TR template is available) [recorded in]
<trackbot> Created ACTION-659 - Start a CfC to publish a FPWD of Web Components Explainer (when an ED with TR template is available) [on Arthur Barstow - due 2012-05-08].
dglazkov: I do spend a lot of
time staring at the spec
... but it's hard for the person who wrote something to see its faults
... any more questions?
chaals: thank you dglazkov
[ Applause ]
chaals: let's have an hour for lunch. Resume at 1:30pm
<dglazkov>
s/Present+ Tony Ross/Present+ Tony_Ross/
Paul_Kinlan: Hi, turns out I'm a
member
... as of a couple of hours ago
... we're going to talk about web intents
... i want to give you a demo
... i don't know how much you know about what we're trying to achieve
[ We try to get projector projecting ]
[ Lights go out ]
[ chaals: Nope, that's not the one ]
Paul_Kinlan: there are a couple
of UCs where it's very hard to build integrations
... with third parties
... the whole point is that even though we have widgets
... there's no way to make integrations
... the biggest common action is Share
... the next is Bookmark
... but things people want to do:
... Edit Documents, Pick resources
... we want to make this easy
... let someone pick something from their cloud storage
[ Projector had temporarily done something positive ]
[ Projector failed ]
[ adrianba: I just pressed random things until it worked ]
[ shepazu: that's how they do most things ]
[ Laughter ]
[ chaals: alright, do the interpretive dance ]
Paul_Kinlan: there are a couple
of common actions
... that we think are core to the web
... users do common things:
... share data
... save physical data to things (like Drop Box)
... they pick data from things (Word Document, Image, Video)
... they could pick from Flickr, Drop Box, YouTube
... one of the things I was going to show (in the demo) was Imagemator
... what we would see on the screen is a big button that says "Choose image"
... the browser knows which services you use
... the demo would let you pick from Picasa
... Picasa doesn't use Intents, but it has a public API which lets you do it
... You can do server to server work
... but we're starting to see purely client side applications
... a lot of their functionality is built on the client
... when you have applications sharing lots of data (Video, ...)
... the data might be local to your network
... or proximate to your network
... we'd like to let these two applications talk directly
... like a bridge
... we have demos that do both
... need a network, or client side resolution
... where we literally process a blob
... the demo itself doesn't do much work
... it finds a service that does editing
... if the browser doesn't have a service for a thing
... the browser can use an indexing service (store, search engine)
... to discover a service
... In the demo, you'd press edit
... you don't have anything installed
... the Chrome Store would be searched
<anne> link to the demo?
Paul_Kinlan: you'd pick Picnik
s|anne link to demo?|-> Web Intents Demos|
s|link to demo?|-> Web Intents Demos|
<anne> ah
DanD: have you looked into a
scenario
... where the application developer wants to choose a certain instance
... say I'm a photo sharing service and I want to choose Picnik
... I want to do it in a way that makes sense
... not to choose a default intent
... but a specific case
Paul_Kinlan: We've talked about
that in the TF
... an "Explicit Intent"
... say you're photoshop.com
... You want to be open to discovery
... however you've got specific integration with
DanD: and the user should be able to override in the end?
gbillock: explicit intents, it's
unclear whether they will be overridable
... explicit intents let web content make the picker
... and letting web developers use Web Intents for internal RPC
... the way that you could bring up a browser guaranteed redress proof UI
... is interesting
... we're hoping with experimentation we'll figure that out
chaals: the answer is that it should be overridable
shepazu: it should be up to the UA
chaals: if you have a local
installed application
... it should work
... say you don't want to use photoshop.com
... you want to use photoshop
gbillock: that's totally within
scope
... we definitely want to be able to build a bridge between web apps and local apps
... for some embedders like OSs
... like Android
... there ought to be a way to create a mapping
... or Windows 8's "Contracts"
... you shouldn't be able to just go from a web app to photoshop
... but also from photoshop to say save to your dropbox
Paul_Kinlan: we also want to be able to do picking
s/picking/viewing/
scribe: it should be easy to do Open With
gbillock: currently the spec is
focused on what you do for the Web Page
... there's language to say that this isn't the only way
... saying that there should be a local execution model
... but that's left up to the UA
chaals: how do you go with AppCache
Paul_Kinlan: we've done a lot of
experimentation with AppCache
... we've experimented with RPC/RPH
... it's hard to get things to work with AppCache'd content
chaals: because it sucks
<chaals> s/chaals: because it sucks//
Paul_Kinlan: most people use a
query string with RPH
... but Intents uses something different, so it could work
magnus: you said you could have a
UA
... that could download it using a search engine
... what happens while it's being retrieved
Paul_Kinlan: the implementation
in Chrome
... does the query using the web store (http)
... the API itself is Async
... the UA pops up the picker
... but the page isn't blocked
... if you have no networking
... then there might be no options for the user
... but how that works is up to the UA
... and because it's Async, that shouldn't affect the page
gbillock: the idea that a user
might be trapped with no options
... is definitely unappealing to developers
... one possibility is to let clients query to see if things are installed
... but that leads to fingerprinting
... that's a weak supercookie
... instead the direciton we're trying to go
s/direciton/direction/
scribe: is to let client
applications provide fallback suggestions
... that the UA can use if the picker would otherwise be empty
... instead of being empty, you might see DropBox or whatever
... our current experimental implementation uses the chrome web store
... so they have to be installable
... the end state we'd want to get io
... is to have a web for web pages to identify themselves as services
... we've been discussing that in the HTML WG
... do we have an <intent> tag
... or ...
... It looks like Hixie is most favorable to having an <intent> tag
... but combining RPC, RPH, <intent> together
... so they'd look the same for users
... giving us both Imperative and Declarative
... and the same User Facing appearance
anne: what Hixie said was quite
reasonable
... that still doesn't say how you identify an app
gbillock: Gmail would say use RPH
for mailto:
... and register <intent>
anne: on my web page, i have a
contact form
... and i have a send me an email link
<gbillock>
s|-> Web Intents specification|
gbillock: if you look at
3.1
... there'd be a services parameter
... in chrome
... the picker is a list of optional services
... the top having items the user has used
... possibly it would query the store
anne: if the developer provides
urls
... what do you show?
... not just the url?
gbillock: no, the page title +
favicon, probably
... or if we've processed it, something else
chaals: I want to go back to
overrides
... the UC will come from Accessibility
... if you made a request for an Explicit Intent
... it should be possible to pick something else
... if you pass off text, then anything can
... but if you pass word97 documents
... then there are some other things that can handle it
shepazu: obviously, if i have
something, i can describe it
... is there any other way to give information to the user?
gbillock: in the picker
itself?
... the client presents the initial messaging to the user
shepazu: is there a way to give a description of the requested action?
<shepazu> s/description/human-readable description/
gbillock: the UA has the complete
Intent call bundled up
... what the action is
... what the type is
... any extra data
... the UA has to use that
... it can definitely customize itself
... to say "which of these services do you want to use to edit a contact"
... we got a bit of feedback from UI people to provide per action wording
[ We have a projected Cosmos ]
gbillock: Selection refers to the
picker
... since the UA is in charge of that
... the UA can be arbitrarily sophisticated in terms of coaching the User
<chaals> hi
s|hi|-> Imagemator|
Paul_Kinlan: this is Imagemator
[ Clicks Choose Image ]
Paul_Kinlan: these are the user's
services
... and these are the store services
... I'll use ... CloudFilePicker.com
... this is Picasa
... it isn't direct, it's via the Picasa API
[ Picks a face with two phones pasted ]
[ Laughter ]
[ Clicks Edit ]
scribe: I'll pick Mememator, i
haven't installed it
... this has no server to server logic
... eventually this will work offline
... I'll pick Inspirationmator
[ Enters Practice Demos; They work ]
Paul_Kinlan: The UA passes the data around
glenn: does this pass data around retain tainting?
Paul_Kinlan: not in this
case
... in here, the canvas isn't tainting
... I want to show two actions "Share blob" and "Share page"
... Web Intents can handle both
adrianba: who decides what to be
shared?
... in the Windows 8 contracts, we publish different options
... the link, the link with metadata, the html
shepazu: like clipboard
Paul_Kinlan: like clipboard
... right now, the application invokes one type
... saying i'm invoking the Image
... the link would be the physical image, and not a reference
gbillock: there are two
strings
... for match making
... the actions and the type
... the actions must match exactly
... and types must match, or if they're mime types must overlap
... if twitter knows to share Images, Links, or Videos
... then it would register for 3 distinct things
... so you get a footprint over all the things you understand
... that's our theory right now
Paul_Kinlan: the client
application says it will do one thing
... your application will say it can support three types of data
... we might need to change it so you can offer one of two things as a request
adrianba: there's a problem where
you have multiple datatypes with precedence
... but it seems that like now the onus is on the user right now
... i know that twitter can take: page link, page link+title
Paul_Kinlan: i think the onus is on the Client app to pick sensible types
adrianba: as a user of the source
app
... i have to know which button to pick to trigger to the destination app i have
Paul_Kinlan: right now, our apps
have one definitive type/action
... share was kind of interesting
... because very few apps share physical data
... most share data
shepazu: i agree with adrianba,
share is ambiguous
... look at facebook
... at one point you only shared a link
... now it also embeds some of the content
gbillock: the thing starting the
activity is the client
... and the thing performing is the service
<Zakim> shepazu, you wanted to ask about "inlining services" into a page with intents
shepazu: i think there will need to be a negotation
gbillock: the question of how
complicated the handshake should be
... is obviously
... in order for this to work, the ecosystem has to agree
... Facebook/G+ occasionally figure out what you meant
... with that in mind, we've erred on the side of no negotation
... expand what you except
anne: i think for most user how to pick a service will be complicated enough
gbillock: we decided to burden the service to enumerate what it supports
anne: maybe you should have a way
for the client to offer multiple at a time
... and let the service indicate its preferred payload
Josh_Soref: this is not "Paste Special"
shepazu: the user is stupid
anne: the user has better things to do
<Zakim> timeless, you wanted to talk about Share v. Save
<chaals> scribe: chaals
<shepazu> s/stupid/stupid (sarcastically)/
timeless: I made a trip and tried
to sign in. I could print a PDF or follow a link. I would like
to decide to send it somewhere.
... you were expecting me to send it via a sharing service, but I want to save it somewhere and then use that to do my sharing.
gbillock: You want to be able to translate the intents?
timeless: I am saying they are the same thing
Paul_Kinlan: I don't have an
answer - there are different things that people expect from
what they see.
... I don't think we don't want to fire two intents, or people will end up publishing washing lists of services that do everything.
timeless: fallback is to have
trasnlator intents. Doable and I want to make it easy - but I
see share and save as the same.
... I can print to my device, rather than on paper. It is really a save, but as far as the computer is concerned it is a print.
Paul_Kinlan: Share was a broadcast, save was putting it somewhere. I can see the mental models behind this, but we have to work on this
gbillock: The API doesn't spell
out the verbs. It is an invocation of delivery leaving things
open for usage to coalesce.
... reason common verbs are useful is that they give a way t develop a good expectation to agree on what you are trying to do.
... There are edge cases which are hard to think about - should a kindle support print and share and save?
<timeless> s/way t/way to/
gbillock: we're waiting to see what happens with usage - what emergent verbs there are.
kamos: we are waiting to get feedback.
shepazu: In your demo you open a tab for the events. It might be interesting to be able to load a service inline on a page...
Paul_Kinlan: we have two
dispositions. All these demos use new tab, for transitory
implementation motivations (bugs)
... there is an inline disposition that should be able to do that.
... let's you see the context, it is relatively unspoofable, it is an area we have been wary of.
s/unspoo/spoo/
scribe: we weren't confident that we could make it secure.
shepazu: can't you have a UI
option where the user gets to choose how it appears?
... eg in maps I want to have something within a page.
Paul_Kinlan: we want to xplore it but haven't.
gbillock: the obstacle is that
the service has to provide an iframeable interface, which is
subject to attack and we haven't figured out how to solve that
yet.
... we are expecting a proposal from someone so we will see what happens.
<Zakim> adrianba, you wanted to talk about example of sharing a page
adrianba: wanted to give an
example from windows. We have contracts, and share contract is
one of them. The browser supports the idea of sharing a page.
User decides to share a page.
... browser is a client in web intents terms. Can share link, a link+metadata, or HTML snippet.
... when I choose share, windows finds services that supports one of those formats. Twitter app might take links+data, a bookmark does something similar, email might use the full HTML snippet, ...
... we allow any service that responds to a type to appear. User doesn't have to think about the options.
... sounds like your model is the service say it can take one of those threee.
Paul_Kinlan: We ahve a model
where you can share a link. Once we have that we can go fetch
more detail, and put it in the metadata pat of the
intent.
... you have a link plus extra metadata. Have to think about service applications - they can ignore data, read it if it is there.
... not all clients will share all data. In Android services don't populate metadata consistently.
<timeless> s/Present+ PaulKinlan//
Paul_Kinlan: using name as a URL
was based on describing a particular experience. Tell people
how you do it, what to populate the data with. Was going to be
a lot looser on definition, with people using URL as value for
where the information will come from.
... both client and server would choose what they send/receive.
adrianba: feels a lot less predictable about what the user is going to receive
<timeless> s|s/Present+ PaulKinlan//||
adrianba: what is the URL for, what can I do with that?
anne: doesn't the service also register types it accepts? I think you have teh same system.
adrianba: if I ublish a url of something with an image, do I mean the page or the image on it?
anne: thinki it makes more sense to send both.
gbillock: one way to do both is
an option we discarded (can reconsider). To integrate with
types, we intend that it be possible to match a microdata type
with complete schema capability...
... contact might be name+phone, or might have a lot more data there.
... idea is that user has a mental image of the service they are using.
... and so builds expectatoions of what is going to happen.
... There is flexibility in terms of how much payload is available to fill in for the service.
... weakness and strength.
... if your phone accepts a contact with no phone number., that seems wrong
shepazu: Could you use this across multiple modalities?
gbillock: We envisage the user agent being able to do things like use NFC to send stuff...
DanD: Who is in control of selecting the directory of services?
gbillock: User Agent. Services the user has installed that meet the required intent.
DanD: who provides the list of
options for what to install?
... on mine it is meaningful - it gives me information about where the suggestions are coming from.
... good this is under control of the browser for sense of trust, user needs to know where the browser is going.
... side effect of that control limits discovery of other services which may be an issue.
... (vendor lock-in...)
gbillock: Think client will be able to attach suggestions.
DanD: more appropriate for app developer to recommend the directory, rather than having the user search the web. But you have to put the destination for searching into the user experience
gbillock: this is a stock UI for inline installs. if there are suggestions from teh client side they show differently. Being able to attribute stuff comprehensibly matters...
<timeless> s/teh/the/
DanD: deja vu here - this is
uddi/wsdl/etc again...
... there were some good developments done there, so looking for the lessons there is a good idea.
<Zakim> tantek, you wanted to ask how broad is the scope of intents and cross-application services, e.g. some examples discussed seem similar to OpenDoc/OLE, especially in local client-app
tantek: deoms are awesome. scope is broader than I ahd understood. How broad is the scope intended to be?
gbillock: spec is 'how pages
invoke intents or get them delivered'
... leaves it up to the browser
tantek: web apps, client apps,
installed web apps are all in scope?
... that page has all the ability of HTML to send data anywhere, on the web or locally.
... including native apps?
gbillock: in principle yes. we haven't done that yet, but it is in scope.
<timeless> s/ordinho/odinho/
kam: scoped to websites, but could do this if there is demand.
Paul_Kinlan: DAP is interested in this. We have been focused on webapp interactions. UA can provide a bridge to add native apps.
tantek: do you know about opendoc
and ola?
... systems for applications doing this. Have you looked at that?
gbillock: nope.
... there is IPR in that area that should be looked at.
... (I know because I did some of it)
<Zakim> shepazu, you wanted to ask about a site registering itself as a service
shepazu: I am on flickr, it wants to tell me it can be a picker service. Is there something that lets them put something on their page so I can register it when I go there?
gbillock: yep.
<timeless> s/I ahd/I had/
<tantek> for the minutes - s/ola/OLE
gbillock: right now we have experimental stuff, but yes we want to be able to do that through declarative syntax for the page.
<timeless> s|for the minutes - s/ola/OLE||
<timeless> s/ola/OLE/
shepazu: if i share stuff with twitter, can I make that a default rather than picking every time?
gbillock: spec leaves that to user agent, we expect that to be possible.
timeless: having something is important to avoid security issues - you don't want a spamming site to get your twitter
shepazu: there should be a user involvement to make sure
gbillock: there is.
<Zakim> timeless, you wanted to note WAI concerns and Portability/Modality concerns
timeless: if the client page is
making a request and can force a directry that doesn't work for
my device, or have an accessibility requirement for specialist
services, or want a different language,
... the client might not have the right answer for the user.
DanD: we already have scripts taht pick stuff...
timeless: right, but they are not necessarily useful for a new device.
<timeless> s/taht/that/
DanD: agree there may be an incompatibility. Would rather have app eveloper test and verify than have the user agent assume the thing will work.
<timeless> s/eveloper/developer/
<timeless> [ Break until 3:30 ]
<plh> Current group participants
<timeless> chaals: we have an item in the Charter for Server Sent Events
<timeless> ... the to things people have come up with are Push SMS stuff
<timeless> ... and a notification that can wake up / remotely start an app (web page)
<timeless> ... i'll hand the floor to bryan
[Yosuke Funahashi introduces himself - co-chair of TV/Web IG]
<timeless> yosuke: Yosuke Funahashi
<timeless> s|Funahashi|Funahashi, co-chair of TV/Web IG|
<timeless> bryan: I've taken the UCs and broken them into a set of
<ArtB> UCs:
<timeless> ... more discrete things which i'll call proto requirements/ideas
<timeless> s|UCs:-> Push UCs|
<timeless> ... there's a link to a w3-ified draft from within OMA
<ArtB> Draft Bryan mentioned:
<timeless> ... it doesn't address all of the requirements
<timeless> s|Draft Bryan mentioned:-> EventSource Push (Draft)|
<timeless> ... I've built this, and have a demo (which I won't try to show today)
<timeless> ... and there will be a social network demo called "Mobile Social Networking"
<timeless> ... at XXX
<timeless> ... we noticed that XMPP connections burn battery real fast
<timeless> ... I want to get notifications of things that are really asynchronous
<timeless> ... e.g. an auction watcher
<timeless> ... doesn't want to keep an application open
<timeless> ... until recently, you couldn't run a browser in the background on mobile devices
<timeless> ... the next UC is a WebRTC client
<timeless> ... the phone application/dialer
<timeless> ... it doesn't take over the screen until it needs to
<timeless> ... we need some way to register for wake up events
<timeless> ... we were looking for a way that was more seemless
<timeless> ... @ TPAC:WebApps last year
<timeless> ... there was a request that things not be so specific
<timeless> ... my proposal was based on my experience w/ SMS/OMA Push
<timeless> ... but we need to create a mapping between text-eventstream and these other things
<timeless> ... I ran into an issue involving blank lines
<timeless> ... Maybe we end up building on a processing model
<timeless> [ bryan is reading through the Push "Derived Requirements" section ]
<timeless> bryan: there needs to be a way to provide filters
<timeless> ... the ability to deliver information to a web app before it shows a UI
<timeless> ... my draft proposal integrates CORS
<timeless> s/integrates/incorporates/
<timeless> ... to apply the browser security model
s/Push SMS/Push notification/
<Zakim> chaals, you wanted to ask where we go with this now...
<timeless> chaals: when this came into the charter
<timeless> ... i'm not sure if the people who wanted it are here
<timeless> [ sicking raises his hand ]
<timeless> chaals: you have a draft idea of a spec
<timeless> bryan: that may be fairly localized in application
<timeless> chaals: do you have this in web apps space?
<timeless> bryan: not yet
<timeless> chaals: so you're planning to edit this
<timeless> bryan: i could definitely support that
<timeless> ... i'm looking for expert input
<timeless> chaals: next step is to put it into w3 space
<timeless> ... and put it in the list of work items
<timeless> bryan: i'm hoping to have a conversation on things
<timeless> chaals: sure, but you start with an ED
<timeless> bryan: sure
<timeless> sicking: "We"
<timeless> ... (loosely)
<sicking>
<timeless> ... also have a draft proposal
<timeless> sicking: it doesn't cover everything from the proposal
<timeless> ... we could add things
<timeless> ... an application can say "i want to be able to send push notifications to the browser"
<timeless> ... the user agent allows the user to authorize that
<timeless> ... and if authorized, a URL is made available to the application
<timeless> ... and then it can use whichever applicable means to send messages back to the UA/page
<timeless> ... it should be integratable with Apple's push protocol
<timeless> ... currently you can't deliver that message to a particular page
<timeless> ... it shows up on the screen
<timeless> ... but when the user clicks on it, it goes to a certain page
<timeless> ... we could let pages say they don't want things on screen
<timeless> bryan: it sounds like OMAPush
<timeless> ... service indication
<timeless> ... (pre 2000)
<timeless> ... a text message and a url
<timeless> ... that wasn't directed to an application
<timeless> ... We added a way for an application to listen directly
<timeless> ... the key to OMAPush is that it uses Tokenization
<timeless> ... in 4 SMS payloads, you get up to 2k of content
<timeless> ... which isn't achievable without tokenization
<timeless> chaals: so it sounds like we have two sort of half starting points
<timeless> ... going into the same direction
<timeless> ... so the action is to look at them together
<timeless> bryan: i can look at mozilla's draft
<timeless> ... there's interest in integrating Apple's Push Notification
<timeless> ... and C2DM
<timeless> ... (Google's)
<timeless> ... that's where it stands
<timeless> ACTION bryan to look at proposals and start editing
<trackbot> Created ACTION-660 - Look at proposals and start editing [on Bryan Sullivan - due 2012-05-08].
<timeless> magnus: the proposal is to extend Server Sent Events
<timeless> ... with event streams
<timeless> ... but you're not limiting to that
<timeless> chaals: we're not limiting to that
<timeless> bryan: it may, but i found hoops, websockets may be better
<timeless> DanD: I'm a member of WebRTC
<timeless> ... this came up as a requirement
<timeless> ... it got escalated to WebApps
<timeless> ... we did some analysis
<tantek> <aside> chaals, follow-up from your question about Application Manifest, we (Mozilla) do have someone working on a spec, and are iterating in public with intent to submit to Web Apps WG for inclusion/publication: cc:sheapzu,sicking </aside>
<timeless> ... it's nice to have
<timeless> ... but there are emerging technologies which will make it a necessary feature
[reply to your aside: Cool. We chairs are waiting for that :) ]
<timeless> bryan: in this draft, you'll see some text examples
<timeless> ... i'll send a link to the github which has a demo
<timeless> Arnaud: is there a speed requirement?
<timeless> bryan: i haven't seen any request for a service delivery deadline
<timeless> ... things tend to happen within a second or two
<timeless> Arnaud: if you use SMS as a bearer
<timeless> ... it can be slow
<timeless> sicking: Apple's has no promise of delivery at all
<timeless> bryan: it's best effort
<timeless> chaals: where is Mr. Arun?
<timeless> sicking: this is a side project for arun
<timeless> ... he hasn't been able to work on this for a while
<timeless> ... he did a spurt of editing LC feedback into the spec
<timeless> ... i have to work off memory of the outstanding issues
<timeless> ... the big one is One-Time-Only
<timeless> ... and revoking
->--- File API bugz
<timeless> sicking: I think there was something else
<timeless> adrianba: there was the Close thing
<timeless> sicking: i think Close has the same problem space as Revoke
<timeless> adrianba: how concrete do we have to be
<timeless> ... and how interoperable do we need to be
<timeless> ... in IE we have a behavior where something may be cached in the decoded Image cache
<timeless> sicking: i suspect we'll want to define those things
<timeless> ... I suspect we'll be done with the File spec
<timeless> ... before we have those cases done for Images
<timeless> ... I suspect that long term we'll want to and require behaviors
<Zakim> chaals, you wanted to generalise Adrian's question onto tomorrow's agenda
<timeless> chaals: can we bake a stable version of the spec
<timeless> ... that gives a useful stable reference
<timeless> ... while we work forward
<timeless> adrianba: there's a difference
<timeless> ... between is it valuable to have specs that are roughly stable and useful
<timeless> ... and there's a part of a spec where there are so many variations based on underlying platforms
<timeless> ... saying for those things maybe we don't have to specify them maybe ever
<timeless> sicking: i suspect we'll want to define this
<timeless> ... i suspect for image cache, we probably have the same issue
<timeless> ... and if you hadn't brought it up
<timeless> ... we may not have tested it
<timeless> adrianba: web developers care
<timeless> ... and they care when it breaks them
s/can we bake/I would like to take this point out of this discussion and put on the agenda tomorrow whether we can/
<timeless> ericu: they care if we underspecify it
<timeless> ... and it works in one browser and breaks in other browsers
<timeless> sicking: it'll affect every place that uses urls, and every place that reads out of blobs/files
<timeless> ... there are certain things we should define
<timeless> ... there are going to be lots of things we're going to miss
<timeless> ... some of these things should be specified outside the File API spec
<timeless> ... some i suspect we'll get to eventually
<timeless> shepazu: if there are several contentious issues
<timeless> ... and some that will be tricky to do
<timeless> ... maybe we should bring on an additional editor
<timeless> chaals: this isn't an editor issue
<timeless> sicking: it isn't an issue of the File API spec
<timeless> ... it's up to the other specifications to accept a hand off
<timeless> chaals: should we put out a call for a second editor
<timeless> ericu: oh, we have a second editor
<timeless> [ sicking raises hand ]
<timeless> shepazu: you're just very busy
<timeless> sicking: there's a very small amount of this that will go into the File API
<timeless> ... for One-Load-Only
<timeless> ... do we revoke at first access or at end of microtask
<timeless> ... the other is...
<timeless> ... if you start loading, and then you revoke
<timeless> ... should that load continue
<timeless> ... i think on the second one, i don't think we've gotten feedback from you, Microsoft
<timeless> adrianba: oh, I can give you feedback:
<timeless> ... once we've started, it's very hard to stop
<timeless> sicking: ok, so once a load has started it should finish
<timeless> adrianba: for Close
<timeless> ... if you're doing File Reader on a Blob
<timeless> ... the point of calling Close
<timeless> ... is to say you really want to let go of the resources
<timeless> sicking: i considered them to be the same
<timeless> ... but we can keep them as separate
<timeless> ... we should figure out
<timeless> sicking: the first thing is ArrayBuffer v. ArrayBufferView
<timeless> ... i dislike the topic enough that i haven't followed the discussion
<timeless> Josh_Soref: +1
<timeless> sicking: I suspect that we should be using ArrayBufferView
<timeless> adrianba: we can't do that soon
<timeless> sicking: i'm happy to leave it as an OR
<timeless> adrianba: I agree that it seems like it should be the right thing
<timeless> ... by the time we could change
<timeless> ... ECMA TC39 could progress
<timeless> sicking: I suspect that even if TC39 does something, it'll be called as ArrayBufferView or subclassed as that
<timeless> ... my feeling is we do ArrayBufferView now
<timeless> ... and if something new is added, we can add it later
<timeless> adrianba: do we always know
<timeless> ... when you use these things
<timeless> ... can we reliably feature detect support for these?
<timeless> sicking: the Blob constructor is hard to detect
<timeless> anne: you could just try
<timeless> chaals: you guys drink beer tonight and solve the problems
<timeless> anne: you're going to have the need for feature detection all the time
<timeless> ... it hasn't been a real problem in practice
<timeless> adrianba: i think these are relatively new features that haven't been detected
<timeless> anne: you need browser sniffing anyway
<timeless> adrianba: for incremental things
<timeless> ... for response-type in XHR, that's a good feature detect
<timeless> ... set and retrieve
<timeless> ... some things we're adding will be harder for feature detection
<timeless> ... there's an envelop thing with Must understand
<timeless> ... and things which aren't
<timeless> chaals: I really do mean: have a beer tonight, talk about this
<timeless> PaulC: i'd like a beer too
<timeless> anne: we can discuss it tomorrow
<timeless> ... feature detection is the agenda item
<timeless> ericu: File Writer, Locking
<timeless> ... sicking has a new proposal
<timeless> ... but we need to discuss it on the list
<timeless> [ Break until 4:30 ]
<ArtB> IME UCs and Requirements
<timeless> MikeSmith: I worked with Kenji and Hironori
<timeless> ... the main cases where IMEs are important are Japan and Chinese
<timeless> s/Chinese/China/
<timeless> ... and to some extent Korea
<timeless> chaals: and Vietnam
<timeless> MikeSmith: since you don't have 10,000 keys on your keyboard
<timeless> ... you type on your keyboard, it goes into a buffer
<timeless> ... and gets converted
<timeless> ... into Kanji
<timeless> ... you don't want an IME to interfere with Games
<timeless> ... or other things
<timeless> ... similar to Screen Orientation/Pointer Lock
<timeless> ... interactively typing and getting suggestions from a web application
<timeless> ... like Google Suggest
<timeless> ... completing against things in a database in real time
<timeless> ... while you're completing against a database, you're potentially also completing against the IME
<timeless> Josh_Soref: most mobile devices also have IMEs for word completion for Latin languages
<timeless> MikeSmith: Interaction
<timeless> ... some people want feature compatibility with other runtimes
<timeless> ... Flash has the ability to interact with the System IME
<timeless> Josh_Soref: System IMEs are BUGGY AND INSECURE
<timeless> MikeSmith: as a game developer you can use this
<timeless> ... some people want to be able to provide a web based IME
<timeless> ... if you want to create a complete branded and consistent UE
<timeless> ... then if your application includes text input
<timeless> ... and you want to control IME behavior in your application
<timeless> ... then you want to be able to brand and style that
<timeless> yosuke: a lot of systems don't have IMEs installed
<timeless> ... and users don't know / can't install them
<timeless> ... so a web site might want to provide that
<timeless> ... installing that may require privileges
<timeless> MikeSmith: i don't know why we didn't put that one
<timeless> Josh_Soref: google provides that for Translate for Hebrew
<timeless> chaals: Yandex does that for Cyrillic
<timeless> MikeSmith: Hixie didn't think this was the right approach
<timeless> tantek: there's an existing CSS property ime-mode
<timeless> ... from IE5/Firefox
<timeless> ... that should address your Games case
<timeless> ... it's in CSS3 UI LC
<timeless> ... it's at risk
<timeless> ... there's a simple property there
<timeless> ... if there are UCs that are easy to add to them
<timeless> ... the spec is being locked down
<tantek>
<timeless> s/yosuke/ryosuke/
<timeless> ryosuke: there's no concept of IME on/off
<timeless> ... when you switch languages/layouts
<timeless> ... i don't think this new api addresses that either
<timeless> ... maybe there's a way to include that
<timeless> anne: given that web pages already make their own UIs
<timeless> ... it'd be helpful if there was an explanation as to why something is needed
<timeless> s/ryosuke:/rniwa:/
<timeless> chaals: when you implement it, what happens in practice is it doesn't work
<rniwa> hober: ?
<timeless> s/hober: ?//
<rniwa> timeless: ah, ok. hober: thanks
<timeless> rniwa: there's some interest in creating SVG editors
<timeless> rniwa: you want to be able to type things into the SVG
<timeless> ... and that isn't compatible with Content Editable
<timeless> Josh_Soref: some "system" IMEs are buggy and pushing insecure content into them
<rniwa> heycam: that's nice to know :D
<timeless> ... is just as dangerous as pushing data to font engines
<timeless> ... = nice root exploits
<tantek> perhaps consider an informative reference to CSS3-UI for the 'ime-mode' property
<timeless> s/heycam: that's nice to know :D//
<timeless> MikeSmith: the URL Spec
<timeless> ... the api part
i/Topic: URL/Mike: We're about ready for a FPWD, the spec is reasonably advanced. Hopefully some time this month.
<timeless> ... if you're going to expose URL information
<timeless> ... then you want a way to parse them
<timeless> ... the part before this is the algorithm for parsing
<timeless> ... there's a definition of what a URL is
<timeless> ... it isn't defined anywhere
<timeless> ... the first two parts started in the html spec
<timeless> ... but there isn't anything specific
<tantek> I've done some research on what different specs call the different parts of URLs:
<timeless> chaals: there's a gap for people to wake up
<timeless> ... CORS
<timeless> ... D3E/DOM4
<timeless> ... Testing
<timeless> ... Versions/Stabilize
<timeless> ... -- these points here have dragons
<timeless> ... Feature detection
<timeless> ... -- anne + adrianba 's item
<timeless> ... [ their action was to drink beer ]
<timeless> ... Meeting Planning
<timeless> PaulC: when will you do that?
<timeless> ... i'd like to try to be here
<timeless> ... i'd like to have our TPAC plans straight
<timeless> chaals: yes, that being our next meeting
<timeless> ... anything else people want to put in our next meeting?
<timeless> glenn: where are we drinking beer tonight?
<timeless> chaals: that's later in today's agenda
<timeless> ... 9:45-10:15 CORS w/ WebAppSec
<timeless> ... HTML Stuff
<timeless> ... Index DB
<timeless> ... [ real item ]
<timeless> ... Full screen
<timeless> anne: 10 minutes
<timeless> ArtB: what's HTML?
<timeless> krisk: Hixie specs (Sockets, Workers, ...)
<timeless> chaals: Lunch
<timeless> ... Feature detection/stability
<timeless> ... Testing
<timeless> ... Meetings
<timeless> ... - wrap up + beer
<krisk> Tied House Brewery & Cafe 954 Villa Street Mountain View, CA 94041 (650) 965-2739
<timeless> chaals: thanks all
<timeless> [ Adjourned ]
<timeless> RRSAgent: make minutes
<timeless> trackbot, end meeting
This is scribe.perl Revision: 1.136 of Date: 2011/05/12 12:01:43 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/scribe: timeless// Succeeded: s/observer/registered as observer, now member/ Succeeded: s/Telecom/Telecom, member/ Succeeded: s/, Microsoft/, Member/ Succeeded: s/Kris Krueger (krisk)/Kris_Krueger_(krisk)/ Succeeded: s/Glenn_Adams (glenn)/Glenn_Adams_(glenn)/ Succeeded: s/James/James_Hawkins/ Succeeded: s/slot/slot for CORS LC/ Succeeded: s/Pointer Block/Pointer Lock/ Succeeded: s/(my PC suddenly working after taking it apart and putting it together again)// Succeeded: s/Sheppers/Schepers/ Succeeded: s/ted/hober/ Succeeded: s/-->/->/ Succeeded: s/holvard/halvord/ Succeeded: s/else/else (they use "yield")/ Succeeded: s/WATIA/WATIR/ Succeeded: s|Watir:-> Watir| Succeeded: s/Quoata/Quota/ Succeeded: s/that/that's out of scope for this WG/ Succeeded: s|-> WebApps Pub Status (on screen)| Succeeded: s/^^ page on screen// Succeeded: s/wihle/while/ Succeeded: s/Scott Brown/Scott Graham/ Succeeded: s/anne:/travis:/ Succeeded: s/test/tests/ Succeeded: s/kinuko yasuda// Succeeded: s/... expect/chaals expect/ Succeeded: s/chaals/chaals:/ Succeeded: s/on the web site/well pas that point/ WARNING: Bad s/// command: s/pas that/past/that/ Succeeded: s/something/orientation/ Succeeded: s/Mike to be lead Editor/Mike to not be lead Editor but will help to drive it/g Succeeded: s/faciliateted/facilitated/ Succeeded: s/to/to Isolated/ FAILED: s/does'nt existr/doesn't exist/ FAILED: s/help of shepazu/help of ArtB and shepazu/ Succeeded: s/http/-> http/ FAILED: s|-> Shadow DOM ED| FAILED: s|-> Shadow DOM Bug Tree| FAILED: s|s/does'nt existr/doesn't exist/|| FAILED: s|2 does'nt existr|2 doesn't exist| Succeeded: s|s/pas that/past/that/|| FAILED: s|-> Polyfill (using Web Components)| FAILED: s/cameron/heycam/ FAILED: s/custom tags/custom tags: x-slider/ Succeeded: s/travis:/Travis:/g Succeeded: s/Mike_Smith/MikeSmith/g Succeeded: s/it's/[currently] it's/ FAILED: s/Present+ glenn// FAILED: s/elemnt/element/ FAILED: s/Present+ PaulKinlan// FAILED: s/Chrome/Google/ FAILED: s/XHTML/XML/ Succeeded: s/DOM/DOM)/ Succeeded: s/Bryan:/bryan:/g FAILED: s/Present+ Tony Ross/Present+ Tony_Ross/ FAILED: s|anne link to demo?|-> Web Intents Demos| FAILED: s|link to demo?|-> Web Intents Demos| Succeeded: s|ah|-> Imagemator| FAILED: s/picking/viewing/ FAILED: s/chaals: because it sucks// FAILED: s/direciton/direction/ Succeeded: s/io/to/ FAILED: s|-> Web Intents specification| FAILED: s/description/human-readable description/ FAILED: s|hi|-> Imagemator| FAILED: s/stupid/stupid (sarcastically)/ FAILED: s/way t/way to/ FAILED: s/unspoo/spoo/ Succeeded: s/paulk:/Paul_Kinlan:/G Succeeded: s/Paulk:/Paul_Kinlan:/G Succeeded: s/paulc:/PaulC:/G Succeeded: s/Dan_Druta:/DanD:/g FAILED: s/Present+ PaulKinlan// FAILED: s|s/Present+ PaulKinlan//|| FAILED: s/teh/the/ FAILED: s/ordinho/odinho/ Succeeded: s/PaulKinlan:/Paul_Kinlan:/g FAILED: s/I ahd/I had/ FAILED: s|for the minutes - s/ola/OLE|| FAILED: s/ola/OLE/ FAILED: s/taht/that/ FAILED: s/eveloper/developer/ FAILED: s|Funahashi|Funahashi, co-chair of TV/Web IG| FAILED: s|UCs:-> Push UCs| FAILED: s|Draft Bryan mentioned:-> EventSource Push (Draft)| FAILED: s/integrates/incorporates/ FAILED: s/Push SMS/Push notification/ FAILED: s/can we bake/I would like to take this point out of this discussion and put on the agenda tomorrow whether we can/ FAILED: s/Chinese/China/ FAILED: s/yosuke/ryosuke/ FAILED: s/ryosuke:/rniwa:/ FAILED: s/hober: ?// FAILED: s/heycam: that's nice to know :D// FAILED: i/Topic: URL/Mike: We're about ready for a FPWD, the spec is reasonably advanced. Hopefully some time this month. Found Scribe: Josh_Soref Found ScribeNick: timeless Found Scribe: chaals Inferring ScribeNick: chaals Scribes: Josh_Soref, chaals ScribeNicks: timeless, chaals Present: Art_Barstow Charles_McCathieNevile Josh_Soref Dimitri_Glazkov Arnaud_Braud Adam_Klein Soonbo_Han MikeSmith Paul_Kinlan PaulKinlan plh Travis_Leithead anne odinho glenn Russell_Berkoff(Samsung) Rafael_Weinstein Tony Ross Kris_Krueger_(krisk) EricU Vincent_Scheib Bryan_Sullivan Glenn_Adams_(glenn) Dan_Druta Magnus_Olsson Jonas_Sicking Doug_Schepers Tantek_Celik Ted_OConnor Greg_Billock Adrian_Bateman Wonsuk_Lee Paul_Cotton Alex_Komoroske Ojan_Vafai Yosuke_Funahashi Ryosuke_Niwa Agenda: Found Date: 01 May 2012 Guessing minutes URL: People with action items: art barstow cfc chaals start[End of scribe.perl diagnostic output]
|
http://www.w3.org/2012/05/01-webapps-minutes.html
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
Hi there,
I am new here and would appreciate any advice.
I am using the latest versions of rails / ruby / rvm etc as of about 2 days ago (when I set everything up).
I created a new view (~app/views/stories/new.html.erb) by creating a blank document placing some code in it and saving it. When I try to view it however () it reports a routing error.
Is there something I am missing here? Do I need to somehow report the presence of this new view? I noted that in the text the index.html.erb view (which displays correctly - although I have to actually type "~stories/index" as the url rather than it just being found at "stories/ ") is created automatically when I generated the controller. I couldn't find any other way to generate a new view - except by using a scaffold? It's quite frustrating as it means I can no longer continue with the tutorial in the book!
Thanks for any advice you can give me.
Well, in order to get to the bottom of your problem, post the contents of your routes.rb file, stories_controller.rb, and the whole error message.
As far as generating views, it is possible when you generate your controllers. The following link gives you a great overview of using the command line generators.
Thanks for taking the time to help me - here are the contents of the files you requested. Please let me know if there is anything else which may be useful.
routes.rb:
Shovell::Application.routes.draw do
get "stories/index"
# The priority is based upon order of creation:
# first created -> highest priority.
# Sample of regular route:
# match 'products/:id' => 'catalog#view'
# Keep in mind you can assign values other than :controller and :action
# Sample of named route:
# match 'products/:id/purchase' => 'catalog#purchase', :as => :purchase
# This route can be invoked with purchase_url(:id => product.id)
# Sample resource route (maps HTTP verbs to controller actions automatically):
# resources :products
# Sample resource route with options:
# resources :products do
# member do
# get 'short'
# post 'toggle'
# end
#
# collection do
# get 'sold'
# end
# end
# Sample resource route with sub-resources:
# resources :products do
# resources :comments, :sales
# resource :seller
# end
# Sample resource route with more complex sub-resources
# resources :products do
# resources :comments
# resources :sales do
# get 'recent', => 'welcome#index'
# See how all your routes lay out with "rake routes"
# This is a legacy wild controller route that's not recommended for RESTful applications.
# Note: This route will make all actions in every controller accessible via GET requests.
# match ':controller(/:action(/:id))(.:format)'
end
stories_controller.rb:
class StoriesController < ApplicationController
def index
@story = Story.find(:first, :order => 'RANDOM()')
end
def new
@story = Story.new
end
end
Error message:
Routing Error
No route matches [GET] "/stories/new"
Try running rake routes for more information on available routes.
Rake routes output:
stories_index GET /stories/index(.:format) stories#index
Hey mate,
In your routes.rb file you currently only have
get "stories/index"
you'll either need to add stories/new or just add resources :stories and rake.
Yup, as t.ridge says you don't have a route defined for your new method. You can erase most of the routes file as it is all comments. To get things working you can change your routes file to the following:
Shovell::Application.routes.draw do
resources :stories
end
|
https://www.sitepoint.com/community/t/simply-rails-2-page-160-problem/14384
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
vfscanf()
Scan input from a file (varargs)
Synopsis:
#include <stdio.h> #include <stdarg.h> int vfscanf( FILE *fp, const char *format, va_list arg );
Since:
BlackBerry 10.0.0
Arguments:
- fp
- The streamfscanf() function scans input from the file designated by fp, under control of the argument format.
The vfscanf() function is a "varargs" version of fscanf().
Returns:
The number of input arguments for which values were successfully scanned and stored, or EOF when the scanning is stopped by reaching the end of the input stream before storing any values.
Examples:
#include <stdio.h> #include <stdlib.h> #include <stdarg.h> void ffind( FILE *fp, char *format, ... ) { va_list arglist; va_start( arglist, format ); vfscanf( fp, format, arglist ); va_end( arglist ); } int main( void ) { int day, year; char weekday[10], month[12]; ffind( stdin, "%10s %12s %d %d", weekday, month, &day, &year ); printf( "\n%s, %s %d, %d\n", weekday, month, day, year ); return EXIT_SUCCESS; }
Classification:
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
|
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/v/vfscanf.html
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
Flex :: Manual Click Event Trigger ?Dec 17, 2010
How to dispatch an click event : for example <mx:Button by manual event dispatch how to call that someFunction();View 1 Replies
How to dispatch an click event : for example <mx:Button by manual event dispatch how to call that someFunction();View 1 Replies
Consider the following mx:Button:<mx:ButtonIs there some way to programmatically emulate the user clicking the button?One obvious way to do it would simply be to call doSomething() which would give the same end result as clicking the button. But I'm specifically looking for ways to emulate the click -- that is something along the lines of myButton.click() (if that should have existed).View 1 Replies View Related
Is it possible to trigger Flex Piechart Item click event when a Datagrid Item is clicked.If so can anyone give some example.View 3 Replies View Related
Similar to the below in javascript:
<input id="target" type="button" onclick="..." />
<script>
document.getElementById('target').click();
</script>
I'm doing it this way :
[Code]...
But get an error that connect_btn is not defined...
UPDATE
Yeah I'm trying to simulate a click event .
Below is code that has a timer countdown that reads off of the computer. Below in bold is code to read "if it reaches the date, go to and play frame (2).
[Code]...
Below is code that is manual input - I had set up a dynamic txt field in flash named it : raffle_tix_remain When loaded on to the host I can manulally update the xml code and the change will take effect. raffle_tix_remain.text = root.loaderInfo.parameters.raffle_tix_remain;
My question: Since the raffle_tix_remain is a manual input from a user to xml Is there a way to tell flash once it refreshes and "raffle_ tix_ remain" goes to (0)zero gotoAndPlay(2); and let it play like a "sold out" sign i guess that would be a if else statement.
[Code]...
I have a problem and I have potential solution. But I wanted to confirm if there is an easy and simple way to solve my problem.App type:Isometric Game.Problem statement:I am loading images in my flash app and have mouse events attached to them.The images I load are prop images like vehicles, trees, buildings etc., and all of them are transparent.Example: Red ball asset (please ignore the yellow background which I applied to describe the problem)If I click on the actual image area (colored in red), then every thing works perfect.I don't want to trigger mouseevent when I click on empty image part (or transparent area, which I have shown in yellow color)There is one way I know by creating masks in flash. I don't want to do it unless that is the final option left because I load image assets instead of flash assets and I don't want to create a new mask asset for all the assets.There is another method I was going to adopt by using getPixel method of Bitmap.
But there is another problem with this method.I might be able to ignore the click event when I click on the empty part of the asset but if there is some other asset is behind the image in the same location, then I need to process the click event for the occluded image.Well, thinking of solution to this problem takes me to the getObjectsUnderPoint where I can scan the occluded assets
This might be an easy one for you DataGrid experts out there. I following an example for adding rows to a DataGrid dynamically from within a row
[URL]
My tweak that I am trying to acoomlish, is to have a custom itemEditor that is a form with two TextInputs and an OK button. For the life of me I can't get that button to trigger the DataGrid's itemEditEnd event where I have some processing before I call destroyItemEditor. I tried dispatching the event myself directly but got a strange error in DataGrid's updateDisplayList saying editedItemPosition was null (editedItemPosition.rowIndex).
I'm wondering if there's a way to configure a FLEX button so it behaves like a push button...View 3 Replies View Related
Within a specific canvas, I would like a user to be able to press a combination of keys which will trigger an event.(a bit like a cheat in an old megadrive game). Not sure where to start though. Anyone know if it is possible and if so could you give me a clue with how to start?View 2 Replies View Related
I have a List component that has drop-in CheckBox itemEditor that also serves as the itemRenderer. It displays each item as a simple CheckBox with a label.
However, the itemEditEnd Event does not get triggered until I click on something outside of the List. I want it triggered once the CheckBox is checked or unchecked.
I was thinking of manually dispatching the ListEvent.ITEM_EDIT_END in a CLICK Event handler, but then the itemEditEnd Event would get dispatched twice. There's gotta be a better way to do this.
what should i do to get chart's data on the click of respective data Legend. suppose i have array [{id:123, label:sales, year:2010},{id:124, label:refunds, year:2010}]for a column chart which has year in x-axis and sales iny-axis.two legend showing labels sales and refund.What i want is to get the whole data (id:123, label:sales, year:2010) on clicking of the legend 'sales'.What should i do? I tried listening mouse click event and itemClick event.View 1 Replies View Related
How to trigger a custom jQuery event from Flash, passing some data through event object?View 2 Replies View Related
This task doesn't seem too tough, but it has been blocking me for the last couple hours. I am doing a stacked bar chart, and I want the labels to be horizontally and vertically centered within each Bar Segment. The labels are set to be "inside". Such, you can easily center the label horizontally by setting label-align:middle, but there doesn't seem to be anything that can handle the vertical aspect.
Next approach was to create a custom component of the Bar Chart, but that go extremely messy when I was messing with the rendering functions. I thought it would be just modifying this line: v.labelY=v.y + barSeries.seriesRenderData.renderedYOffset - barSeries.seriesRenderData.renderedHalfWidth; but it hasn't worked. Attached is what the bar chart looks like now. And just to clarify, I would like these labels(manual in the picture) to be vertically centered.
I have a bar chart with the vertical axis this way
<mx:verticalAxis >
<mx:CategoryAxis
</mx:verticalAxis>
I would for the labels on the vertical axis to be clickable. So when a user clicks a label a click event fires and I can do something with it. I am not interested in clicking the bar itself (I know how to achieve that)
I tried adding an event listener to the CategoryAxis of type Mouse.Click but nothing gets fired.
Flex data grid has 1 default row created. When user clicks on the second row, a new row needs to be created and made editable.
Here is what works already - user tabs over the columns and when the user tabs while in the last column, a new row is created with default values.
Here is also what already works - user click a button outside the grid, which adds a new row.
(itemEditBegin and itemEditEnd have been implemented)
Here is what does NOT work: When I "single click" on the second row (no data yet - row is null), how do I detect that the currently clicked row is the second row and make it editable? Can I figure out the rowIndex from MouseEvent and use this to add a new row?
Find code below:
<mx:DataGrid
[Code]....
So I'm trying to build a tool that will allow me and other users to all open the same .swf, and then I as the Admin user am able to interact with mine while they all see my mouse movements and button clicks etc on theirs.I'm using BlazeDS to manage this and I'm getting data sent back and forth etc - no difficulties there. The issue I'm running into is this:
In an "Admin" instance, I click a button. I capture that X and Y, then tell Blaze to tell my clients to dispatch a Click event at that X and Y. On my client side, I get that data and dispatch a Click event at that X and Y - but the click is actually caught at the stage level. The click on my client side takes place UNDER all of my buttons and other content - so the whole thing fails.Is there a way to tell it to start the click event at the top?
One of my decoration bitmaps covers up some important elements in my flex application. The problem is these elements become not clickable. How could make the bitmap not clickable or how could I pass the click event along to those children elements below?View 2 Replies View Related
Can i create img map and then add several click event?for example this imgI have 4 area for event top-left, top-right etc.View 2 Replies View Related
Is there a way to write a custom event that gets triggered when the user clicks outside of that custom component instance? Basically anywhere else in the main flex app.View 2 Replies View Related
i'm trying to embed a swf to my as3 flex project like this:
[Embed(source = "../assets/next_button.swf")]
[Bindable]
protected var nextButtonClass:Class;
protected var next_btn:MovieClip = next_btn = new nextButtonClass() as MovieClip;
// ...
next_btn.addEventListener(MouseEvent.CLICK, onAdChange);
next_button.swf is as2 and created with adobe flash cs4. there is a single button inside it.
if i change type of button symbol to movieclip at next_button.fla, there is no problem at passing CLICK event.
i tried to cast next_btn to mx.controls.Button and fl.controls.Button classes, next_btn is becoming null in that case.
by the way button is reacting mouseover and click events properly just doesn't pass it to upper swf.
is there any trick i can do to pass Button events to my container swf?
I want change the HBox's style when click any object inside this HBox. I set handle for click event of HBox, and then I found it very difficult to select item in the combobox in this HBox.When I click the combobox, it drops down its item list, and HBox style changed, then combobox drop up very quickly, I have no time to select an item in the Combobox.Here is my codes, is there any way to avoid this problem?
<mx:Repeater
<mx:HBox
<mx:ComboBox
[code].....
<s:ViewNavigator
<s:ViewNavigator
Now, I know if you click on "trends" then firstView "views.TrendsView" will be shown. Now you are in that view and click again on "trends" (bottom nav bar) which event will flex dispatch?
I'm trying to figure out a way to have a button basically trigger right arrow key when it is clicked.View 2 Replies View Related
I want a link to trigger a sound click in flash AS3. I've taken the .play() outside of the function to confirm that it works by itself. What am I missing that will let me call an AS3 function from javascript?
Here is my html
<object width="5px" height="5px">
<param name="movie" value="play_coin_sound/playCoin.swf?v=5">
<param name="wmode" value="transparent">
<embed src="play_coin_sound/playCoin.swf?v=5" width="5px" height="5px">
[Code] .....
add a click event handler to the vertical axis (or any axis) of a barchart in flex? If I add the handler to the BarChart itself, it looks as though the event doesn't fire unless you click on the actual chart, not the axes.View 1 Replies View Related
I have a VBox, I assigned a handler to click, and inside the VBox I have components such as images and texts with no handler assigned for click. Would the click function be called when I click on the text and image? If not how can I make it so without assigning handlers individually, but on the container level?View 3 Replies View Related
I registered a very simple button event listener in A.mxml:
<mx:Script><![CDATA[
import mx.controls.Alert;
public function Handler():void
[code]....
It works fine when clicking the button everytime.Now I want to have something interesting,that is,I want to capture this buttonClick Event in another mxml file,say B.mxml and do something in B.mxml instead of A.
I am currently working with a drawing tool for a mapping API, and every time I double-click the mouse a map service will perform a measurement and display the length of the line that I am drawing.
I want to mimic this double-click manually by dispatching a MouseEvent.DOUBLE_CLICK, but the map service is not listening to this default Flex event. I suspect that the API has created a custom MapMouseEvent or something like it that is being dispatched when a user double-clicks the mouse.
Is there a way to determine which event is being dispatched when I double-click the mouse?
I have created a linkbar with two labels. Now, I need to keep a track of the label clicks.i.e. If in the beginning "First" is clicked, the details will be displayed. After that, if without submitting the details, "Second" is clicked then an alert message should come to inform the user that "First is still in progress, do you want to cancel and begin Second operation". Vice-versa for Second to First transition. I need to know how to write events to keep track of which button clicked.View 1 Replies View Related
|
http://flash.bigresource.com/flex-Manual-click-event-trigger--A2aLfLMNX.html
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
Flash CS4 Resources
Using Flash
ActionScript 3.0 and Components
ActionScript 2.0 and Components
Adobe AIR
Flash Lite
Extending Flash
Sound files that use the mp3 format can contain additional
data about the sound in the form of ID3 tags.
Not every mp3 file contains ID3 metadata. When a Sound object
loads an mp3 sound file, it dispatches an Event.ID3 event
if the sound file contains ID3 metadata. To prevent run-time errors,
your application should wait to receive the Event.ID3 event
before accessing the Sound.id3 property for a loaded sound.
The following code shows how to recognize when the ID3 metadata
for a sound file has been loaded:
import flash.events.Event;
import flash.media.ID3Info;
import flash.media.Sound;
var s:Sound = new Sound();
s.addEventListener(Event.ID3, onID3InfoReceived);
s.load("mySound.mp3");
function onID3InfoReceived(event:Event)
{
var id3:ID3Info = event.target.id3;
trace("Received ID3 Info:");
for (var propName:String in id3)
{
trace(propName + " = " + id3[propName]);
}
}
This code starts by creating a Sound object and telling it to
listen for the Event.ID3 event. When the sound
file’s ID3 metadata is loaded, the onID3InfoReceived() method
is called. The target of the Event object that is passed to the onID3InfoReceived() method
is the original Sound object, so the method then gets the Sound
object’s id3 property and then iterates through
all of its named properties to trace their values.
|
http://help.adobe.com/en_US/ActionScript/3.0_ProgrammingAS3/WS5b3ccc516d4fbf351e63e3d118a9b90204-7d18.html
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
Agreements
FAO's Emergency Prevention System for Transboundary
Animal and Plant Pests and Diseases (EMPRES) have published
the eighth number of their Transboundary Animal Disease
Bulletin. The bulletin can be read on the EMPRES
web site or downloaded
in pdf.
The latest bulletin includes news items on:
as well as many other topical issues in this field.
EMPRES also run an electronic
discussion group that aims to facilitate interaction and
improved communication among subscribers involved in
transboundary animal diseases and emergency prevention
systems.
EMPRES
16 March 1999
The Ministerial Meeting on the Implementation of the Code
of Conduct for Responsible Fisheries closed 11 March with
the endorsement of new voluntary International Plans for the
Management of Fishing Capacity, for the Conservation and
Management of Sharks and for Reducing Incidental Catch of
Seabirds in Long-line Fisheries, recently adopted by the
FAO
Committee on Fisheries.
Ministers and Senior Representatives from some 120
countries attending the two-day meeting at FAO headquarters
expressed their concern about "overfishing of the world's
major marine fishery resources, destructive and wasteful
fishing practices and excess capacity." The countries also
declared that they would develop a "global plan of action to
deal effectively with all forms of illegal, unregulated and
unreported fishing including fishing vessels flying 'flags
of convenience'".
Go to Press
release
Ministerial
Meeting on the Implementation of the Code of Conduct for
Responsible Fisheries
12 March 1999
Adverse weather and civil strife are the cause of serious
food shortages in some countries in sub-Saharan Africa,
according to the latest Foodcrops and Shortages report, the
first of 1999. "Serious concern mounts over deteriorating
food situation in Somalia", according to the report. A sixth
poor "Deyr" crop in succession and renewed fighting in many
areas of the country have combined to increase the number of
people in search of food and water. Other countries in the
region reported to be suffering food difficulties are
Tanzania,
Kenya, Liberia
and Sierra
Leone, Democratic Republic of Congo and the Republic of
Congo, as well as Guinea Bissau. Good prospects for crops
are expected in most of the southern part of the
continent.
Elsewhere, food shortfalls are reported in much of
Central
America and the Caribbean, still suffering from the
effects of hurricanes Mitch and Georges. In Asia, food
security in Afghanistan and Korea DPR remains fragile, and
malnutrition continues to be a problem in Iraq, despite some
improvement in the overall food situation following the
implementation of the oil-for-food deal. A recovery in rice
production is expected in Indonesia, where a combination of
El Niño drought and the financial crisis last year
seriously compromised food security.
Go to Foodcrops
and Shortages No. 1, February 1999
An FAO/WFP Crop and Food Supply Assessment Mission to Lao
People's Democratic Republic has found that rice harvests in
1998/99 have been healthy so far, despite reports of an
unfavourable food outlook. Although localized drought was
reported during and after transplanting, the 1998 monsoon
rice crop is 7 percent above average. Total paddy output for
1998/99 is estimated at some 1.8 million tonnes, 22 percent
above the average for the past five years and one percent up
from last year. The Special Report issued by the Mission on
4 March says that, "on current production estimates, rice
import requirements in 1999 will be minimal, estimated at
around 3 000 tonnes, all to be met commercially". However,
the report warns that many rural households have
insufficient access to food, despite increased national
production. "This situation is exacerbated by relatively
high world commodity prices, a rapidly depreciating currency
and a sizeable fiscal deficit."
Go to the Special
11 March 1999
FAO hosts three ministerial meetings in the week 8 to 12
March. Forestry, the implementation of the Code of Conduct
for Responsible Fisheries and Agriculture in Small Island
Developing States are the topics for discussion.
The Ministerial Meeting on Sustainability Issues in
Forestry, the National and International Challenges,
convened 8 and 9 March, provides a forum for global decision
on strategic and policy issues related to forestry. Some of
the items under discussion include: the need for
international instruments to support sustainable forest
develoment; global action to address forest fires; and the
proposed FAO Strategic Framework for the years 2000 to
2015.
Ministerial
Meeting on Sustainability Issues in Forestry, the National
and International Challenges
The Ministerial Meeting on the Implementation of the Code
of Conduct for Responsible Fisheries is scheduled for 10 and
11 March. Topics under consideration include management of
fishing capacity as well as the potential role of
eco-labelling of fish and fishery production in support of
responsible fisheries.
Ministerial
Meeting on the Implementation of the Code of Conduct for
Responsible Fisheries
Sustainable production, intensification and
diversification of agriculture, forestry and fisheries in
small island developing states (SIDS) is the focus of the
special meeting slated for 12 March. The international
conference aims to develop a mission-specific plan of action
consisting of programmes and projects for the sustainable
agricultural development of SIDS, recognizing the specific
constraints facing these small island nations.
Special
Ministerial Conference on Agriculture in Small Island
Developing States
9 March 1999
The latest bulletin reports a generally calm desert
locust situation in February despite two small outbreaks in
northeastern Sudan and southeastern Libya. Continuing
control operations were credited with containing these
unrelated incidents. According to the bulletin, "The Libyan
outbreak does not threaten neighbouring countries or
regions."
Elsewhere, unusually dry conditions have prevented any
significant developments in breeding areas along the Red Sea
coasts. However, good rains have started to fall in the
spring breeding areas of western Pakistan where low numbers
of adults are present and may start to breed.
Desert Locust Bulletin 245 reports on the general locust
situation during February 1999 and provides a forecast until
mid-April 1999.
Go to the Latest
Desert Locust Situation and Forecast
The World Health Organization (WHO) and FAO have made a
joint statement saying that the risk of infection by the
Rift Valley fever virus is back down to minimal or
negligible levels in countries in the Horn of Africa, after
a devastating epidemic that lasted from October 1997 to
March 1998. The countries concerned are Tanzania, Kenya,
Somalia and Ethiopia. The improved situation is the result
of favorable climatic conditions and the immunity developed
by a large proportion of the livestock during the recent
epidemic. The joint statement said of livestock exports from
the countries in the Horn of Africa, "the present extremely
low risk of Rift Valley fever infection in livestock is
comparable to the risk in former years that permitted the
safe export of livestock."
FAO/WHO
joint statement
FAO
supports budding small livestock and meat export industry in
Tanzania
Pastoralists in eastern
Africa hard hit by Rift Valley fever and other
diseases
Since the last GIEWS Special Alert for Angola in December
1998, the food outlook in the southwest African country has
become "increasingly bleak", according to an Alert issued on
18 February. Intensified fighting since the end of the year,
particularly in the central highlands and the northern
provinces has forced more and more people to flee their
homes and fields, aggravating what the Alert calls "an
already precarious food situation in several parts of the
country". Food prices have risen sharply in many areas and
difficulties in distributing relief assistance are leading
to growing levels of malnutrition. Because of the fighting,
"the 1999 crop is expected to be sharply below the output in
recent years", according to the Alert, which closes
stressing the "urgent need for the international community
to do everything possible to ensure that adequate
humanitarian assistance is provided to the affected Angolan
population".
Go to the full Special
Alert
An FAO/WFP Crop and Food Supply Assessment Mission to
Cambodia has found that fears of reduced rice harvests have
proved largely unfounded. According to the Special Report,
posted 17 February 1999, despite drought and scattered pest
infestations, the wet (main) season paddy production for
1998/99 is estimated at 2.88 million tonnes, 8 percent up
from last year. Taking into account the drop of about 14
percent in the dry season paddy harvest, total rice
production for 1998/99 is estimated at 3.52 million tonnes -
3 percent up from last year. The Mission forecasts a small
surplus of nearly 30 000 tonnes of rice, but warns that
despite this, "vulnerable segments of the population will
face varying degrees of food shortage in 1999" and urges the
Government to "be cautious with regard to decisions on rice
export in 1999."
Go to the full Special
3 March 1999
©FAO,
1999
|
http://www.fao.org/english/newsroom/highlights/1999/brief/BR9903-e.htm
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
This is a class containing a parse function for conditions. More...
#include <rtt/scripting/ConditionParser.hpp>
This is a class containing a parse function for conditions.
It is used by ProgramParser, and probably other parser's too in the future...
Definition at line 61 of file ConditionParser.hpp.
conditions used to be more complex, but nowadays, they're just boolean expressions..
Definition at line 52 of file ConditionParser.cpp.
Call this to get the parsed condition.
If you use it, you should subsequently call reset(), otherwise it will be deleted in the ConditionParser destructor..
Definition at line 99 of file ConditionParser.cpp.
References RTT::internal::DataSource< T >::get().
|
http://www.orocos.org/stable/documentation/rtt/v2.x/api/html/classRTT_1_1scripting_1_1ConditionParser.html
|
CC-MAIN-2015-48
|
en
|
refinedweb
|
Welcome to the Ars OpenForum.
If you're in London and know Python/Rust hit me up I'm looking for a junior dev.
Clearly someone wants to go backpacking through Europe and see mount Tibidabo …
Anyone using Pulumi for IaC? I've played with a bit and it seems to be Terraform, except not using a braindead language, which is nice. Curious if anyone has experiences, good or bad, with this tool. The cost model for the Pulumi Service rings some alarm bells for me - objects times hours = "Pulumi credits" which are then billed. Compared to Terraform Cloud, which bills per user, this model is not as attractive for an organization that is all in on IaC and automating a large number of cloud resources with relatively few engineers. Self-managed backend via S3/KMS is workable, but the size of the community seems much smaller than Terraform, which gives me a bit of pause when comparing open source options.
Everyone I know that uses Pulumi loves Pulumi. All of those people have come from Cloud Formation, terraform, etc.
I want to learn from my mistakes and learn to love python.I really want to be a positive force in my org.I crave semi-reproducible builds between CI and workstations.Even with that, poetry is such a letdown compared to cargo. Hell, even nuget would work better. The python ecosystem is such an orobouros at this point.
I am considering two options:1- everyone has to use the same docker image as the CI server and PROD instance, with a non-root account 2- RIIR (all I need is an alternative to cvxpy in Rust for that to happen)Cargo mostly gets it right, apart from the way crate features are additive across a workspace. There is an alternative, opt-in features resolver that can fix that, but features of dependencies not being namespaced is a huge pain.
I don't know, once you have a sane way of defining and locking dependencies... it mostly works. I switched to Poetry some time ago, and I don't really have any major problems with it. Heck, I may have some stockholm syndrome, but if you know what you're doing, setup.py can kinda work.(Mostly talking about projects with mostly Python dependencies and not too weirdness inside.)...Is there any language out there with native/first-class-citizen-ish AST transforms? I'm thinking something where you can implement macroish stuff with AST transforms, or even provide users of your library with an AST transform they can apply to their source code to update? Or a powerful way to write safe refactoring tools (maybe such a language wouldn't need reflection at all, so refactoring could be *very* safe). These AST transforms could be one-offs (apply this transform to the entire codebase, save the result) or ongoing (declare this set of transforms in the build definition- it's applied on the go before execution).I'm thinking a Rusty-Haskelly language (but preferably one with GC and with nice compiler errors) with such feature would be very nice to work with.
Separately, the LLVM/Clang toolchain makes it somewhat feasible to develop tools that can transform C or C++ code for large-scale adaptations. Google apparently just does this as a matter of course, anytime someone decides that some particular library API needs improvement, and so revises the entire world around it.Rust also has syntax macros that I understand to be fairly wide-ranging, though I don't have anywhere near the familiarity necessary to speak to their capability. They've also talked about providing tools to migrate code across 'editions' of the language, if one wants to modernize it, rather than rely on the guaranteed backward compatibility in the compilers. Again, I don't really have a grasp of what's out there.
In Rust we use the syn parser, and transform the AST using quote - it's quite powerful, although done at build time, and the tooling doesn't support debugging the build.rs compilation step.
AOP... is runtime, mostly, no?I wasn't explicit, but I was thinking purely statically typed languages. I read about static typing extensions for Lisp, and those intrigue me... But I also think doing one-off transforms is not a thing in Lisp? (Not very much into the ecosystem, so I can be very wrong). In any case, I think Lisp ain't exactly what I'm looking for.Rust is a bit of an inspiration to my question- but the "prefer a GC" is maybe too weak- I was really looking for something simpler than Rust (and Haskell- too unfriendly...).
R is... a force to be reckoned with! Koala : Have you had a look at Julia? even C#. With Roslyn it's pretty simple nowadays to emit code. I used Reflection.IL back in the days (to generate my own stack-based computations from an IQobservable tree...) Actualy now that I am writing it, you may want to have a look at C# and IEnumerable <> IQueryable, where the latter is an AST over the former that can be altered then compiled. If you work in reactive space you should have a look at IObservable <> IQuobservable. There are some amazing videos about that on Channel9 with Bart De Smet - great intro to FP for myself back in the days.Or peruse the docs and look for Expression<Func<...>> - there might be some new advancements since I last coded professionally in .Net. My knowledge covers 2001 - 2019 (C# 8). I remember that one could write something like
Yeah, "much more of that becomes a developer responsibility rather than being a language designer responsibility" is part of what I wanted to avoid- esp. I want "safe [automatic] refactors".R is dynamic too, I think. Julia turned up in a few searches, so I'll have a look at that. And I need to play with .NET more; F# might be the answer to some other questions that lurk in my head, so it wouldn't hurt to look at C# too.
In practice, Java, C# catch many more type errors before runtime than Python et al. Sure, erasure in generics is idiotic- in practice, since Java generics were introduced, I don't remember any issue with putting an object of the wrong type in a collection [to be fair, I didn't have too many issues with that *before* either. But the constant casting was exhausting].So I think there's a clear distinction between the typing disciplines of Java/C# and those of Python etc. You can call it whatever you want- I think most people still will understand better if we call them static and dynamic typing.
(I do agree, though, about compiled vs. interpreted. It was never such a useful distinction, IMHO- except for the "this has a compile step that can detect many issues that in other languages are runtime errors. To me, the interesting distinction is "do I need to install some piece of software before running programs written in X?". That line is a bit hazy too, but definitely Go/Rust/C/etc. mostly work, while Python/Java still are challenging- sometimes it works, but often you run into big issues...)
(also, obligatory "funny" link as per my last post . Unfortunately, digging around, it seems that the project seems more interested in embedding an x86_64 emulator for other CPUs rather than making fatter binaries...)
The tooling benefits are there- I've done plenty of refactorings, and I for surely think their benefit is bigger than the (small) effort I put in writing down types. After a phase where I thought preprocessing is evil (mostly due to associating preprocessing to the C preprocessor), now I think that reflection is evil (after using Rust instead, which has "preprocessing done right". In fact, this is where my original line of thought in this thread came from. Say yes to ... rification , for example).BTW, note that I think in general, practitioners of Haskell, Rust or other languages with type inference still write down types in many places. Not only to lock down some kinds of instabilities, but also because it's actually useful information (mouse-overing gets tiring after a while- I thought that editors could display type definition constantly and be nice, until I used the Rust VS Code plugin, which had this as default...).
I've found the type rarely matters, and anyway I have IntelliSense.var really shines matters when you start wholesale refactoring method signatures and whatnot, especially inside libraries where you may not want duck typing for perf reasons. (Return a set instead of a list, for example.)I find in application scenarios, my vars are usually primitives, or return values from methods, and my method return types are usually something like IReadOnlyCollection<T>. Do I benefit from writing IReadOnlyCollection<T> foo = GetSomeCollection(); ? I don't see that I do. And anyway, I can roll over the var and see its type if I care that much. In fact, it seems like a recipe for casting to less expressive types out of ignorance. ("Well I don't remember what this is, so I'll call it IEnumerable<T> foo for now, and come back and rename and get the type right later." which is a thing I do _all the time_ when I'm trying out an idea, and the extra typing just gets in my way. If I use var, I don't have to care about that. The number of times I type var foo = baz.Something in a given day is more than I have fingers and toes. And yes, I *always* rename the foo to something once I actually know what foo *is*, which is often a process of exploration and discovery.)
Any idea why pandas would return an empty dataframe from an .xlsx that is decidedly not empty? Had a streamlit app working one day, and now, there is no data in my excel file according to Pandas. Was there an update I missed or something?EDIT** Never mind, through my pd.read into a function that was returning nothing. Now it returns something. I'm dumb.
Tangentially related to the code transformations discussion, but Comby is a neat tool that's worth a look. I've used it to help out with annoying things like switching from NUnit to xUnit with FluentAssertions where a simple search/replace or regex isn't good enough.
Took me all day and cryptic Reddit and/or stack overflow posts to realize that Pandas really doesn't like column titles with spaces. That seriously took way too fucking long you fucking nonce of a library.
In the end, you also have to go with the market reality.Ruby, Smalltalk, etc. ecosystems are not viable in many scenarios. So if I'm going to use a dynamic language it's going to be Python- whose tooling is inferior to C#/Java. I'll use Python if I'm going to benefit from its qualities (scientific environment, Django admin, etc.) or my team is more used to that, or similar reasons.You could make an argument about Clojure- because you can leverage JVM libraries... but I'm not seeing it, really.
|
https://arstechnica.com/civis/viewtopic.php?f=20&p=41137305&sid=689b6295c776969c518287dc7dcdc314
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
"ble/BLE.h" #include "pretty_printer.h" #include "mbed-trace/mbed_trace.h" /** This example demonstrates all the basic setup required * to advertise and scan. * * It contains a single class that performs both scans and advertisements. * * The demonstrations happens in sequence, after each "mode" ends * the demo jumps to the next mode to continue. * * You may connect to the device during advertising and if you advertise * this demo will try to connect during the scanning phase. Connection * will terminate the phase early. At the end of the phase some stats * will be shown about the phase. */ /* demo config */ /* you can adjust these parameters and see the effect on the performance */ /* Advertising parameters are mainly defined by an advertising type and * and an interval between advertisements. Lower interval increases the * chances of being seen at the cost of increased power usage. * * The Bluetooth controller may run concurrent operations with the radio; * to help it, a minimum and maximum advertising interval should be * provided. * * Most bluetooth time units are specific to each operation. For example * adv_interval_t is expressed in multiples of 625 microseconds. If precision * is not require you may use a conversion from milliseconds. */ static const ble::AdvertisingParameters advertising_params( ble::advertising_type_t::CONNECTABLE_UNDIRECTED, ble::adv_interval_t(ble::millisecond_t(25)), /* this could also be expressed as ble::adv_interval_t(40) */ ble::adv_interval_t(ble::millisecond_t(50)) /* this could also be expressed as ble::adv_interval_t(80) */ ); /* if the controller support it we can advertise multiple sets */ static const ble::AdvertisingParameters extended_advertising_params( ble::advertising_type_t::NON_CONNECTABLE_UNDIRECTED, ble::adv_interval_t(600), ble::adv_interval_t(800) ); static const std::chrono::milliseconds advertising_duration = 10000ms; /*. */ static const ble::ScanParameters scan_params( ble::phy_t::LE_1M, ble::scan_interval_t(80), ble::scan_window_t(60), false /* active scanning */ ); static const ble::scan_duration_t scan_duration(ble::millisecond_t(10000)); /* config end */ events::EventQueue event_queue; using namespace std::chrono; using std::milli; using namespace std::literals::chrono_literals; /* Delay between steps */ static const std::chrono::milliseconds delay = 3000ms; /** Demonstrate advertising, scanning and connecting */ class GapDemo : private mbed::NonCopyable<GapDemo>, public ble::Gap::EventHandler { public: GapDemo(BLE& ble, events::EventQueue& event_queue) : _ble(ble), _gap(ble.gap()), _event_queue(event_queue) { } ~GapDemo() { if (_ble.hasInitialized()) { _ble.shutdown(); } } /** Start BLE interface initialisation */ void run() { /* handle gap events */ _gap.setEventHandler(this); ble_error_t error = _ble.init(this, &GapDemo::on_init_complete); if (error) { print_error(error, "Error returned by BLE::init"); return; } /* (_gap.isFeatureSupported(ble::controller_supported_features_t::LE_2M_PHY)) { ble::phy_set_t phys(/* 1M */ false, /* 2M */ true, /* coded */ false); ble_error_t error = _gap.setPreferredPhys(/* tx */&phys, /* rx */&phys); /* PHY 2M communication will only take place if both peers support it */ if (error) { print_error(error, "GAP::setPreferedPhys failed"); } } else { /* otherwise it will use 1M by default */ } /* all calls are serialised on the user thread through the event queue */ _event_queue.call(this, &GapDemo::advertise); } /** Set up and start advertising */ void advertise() { ble_error_t error = _gap.setAdvertisingParameters(ble::LEGACY_ADVERTISING_HANDLE, advertising_params); if (error) { print_error(error, "Gap::setAdvertisingParameters() failed"); return; } /* to create a payload we'll use a helper class that builds a valid payload */ /* AdvertisingDataSimpleBuilder is a wrapper over AdvertisingDataBuilder that allocated the buffer for us */ ble::AdvertisingDataSimpleBuilder<ble::LEGACY_ADVERTISING_MAX_SIZE> data_builder; /* builder methods can be chained together as they return the builder object */ data_builder.setFlags().setName("Legacy Set"); /* Set payload for the set */ error = _gap.setAdvertisingPayload(ble::LEGACY_ADVERTISING_HANDLE, data_builder.getAdvertisingData()); if (error) { print_error(error, "Gap::setAdvertisingPayload() failed"); return; } /* Start advertising the set */ error = _gap.startAdvertising(ble::LEGACY_ADVERTISING_HANDLE); if (error) { print_error(error, "Gap::startAdvertising() failed"); return; } printf( "\r\nAdvertising started (type: 0x%x, interval: [%d : %d]ms)\r\n", advertising_params.getType(), advertising_params.getMinPrimaryInterval().valueInMs(), advertising_params.getMaxPrimaryInterval().valueInMs() ); #if BLE_FEATURE_EXTENDED_ADVERTISING /* if we support extended advertising we'll also additionally advertise another set at the same time */ if (_gap.isFeatureSupported(ble::controller_supported_features_t::LE_EXTENDED_ADVERTISING)) { /*.createAdvertisingSet(&_extended_adv_handle, extended_advertising_params); if (error) { print_error(error, "Gap::createAdvertisingSet() failed"); return; } /* we can reuse the builder, we just replace the name */ data_builder.setName("Extended Set"); /* Set payload for the set */ error = _gap.setAdvertisingPayload(_extended_adv_handle, data_builder.getAdvertisingData()); if (error) { print_error(error, "Gap::setAdvertisingPayload() failed"); return; } /* Start advertising the set */ error = _gap.startAdvertising(_extended_adv_handle); if (error) { print_error(error, "Gap::startAdvertising() failed"); return; } printf( "Advertising started (type: 0x%x, interval: [%d : %d]ms)\r\n", extended_advertising_params.getType(), extended_advertising_params.getMinPrimaryInterval().valueInMs(), extended_advertising_params.getMaxPrimaryInterval().valueInMs() ); } #endif // BLE_FEATURE_EXTENDED_ADVERTISING _demo_duration.reset(); _demo_duration.start(); /* this will stop advertising if no connection takes place in the meantime */ _cancel_handle = _event_queue.call_in(advertising_duration, [this]{ end_advertising_mode(); }); } /** Set up and start scanning */ void scan() { ble_error_t error = _gap.setScanParameters(scan_params); if (error) { print_error(error, "Error caused by Gap::setScanParameters"); return; } /* start scanning and attach a callback that will handle advertisements * and scan requests responses */ error = _gap.startScan(scan_duration); if (error) { print_error(error, "Error caused by Gap::startScan"); return; } printf("\r\nScanning started (interval: %dms, window: %dms, timeout: %dms).\r\n", scan_params.get1mPhyConfiguration().getInterval().valueInMs(), scan_params.get1mPhyConfiguration().getWindow().valueInMs(), scan_duration.valueInMs()); _demo_duration.reset(); _demo_duration.start(); } /* helper function to hide the casts */ int read_demo_duration_in_ms() { return duration_cast<duration<int, milli>>(_demo_duration.elapsed_time()).count(); } private: /* Gap::EventHandler */ /** Look at scan payload to find a peer device and connect to it */ void onAdvertisingReport(const ble::AdvertisingReportEvent &event) override { /* || !ble::adv_data_flags_t(field.value[0]).getGeneralDiscoverable()) { continue; } /* connect to a discoverable device */ /* abort timeout as the mode will end on disconnection */ _event_queue.cancel(_cancel_handle); printf("We found a connectable device\r\n"); ble_error_t error = _gap.connect( event.getPeerAddressType(), event.getPeerAddress(), ble::ConnectionParameters() // use the default connection parameters ); if (error) { print_error(error, "Error caused by Gap::connect"); return; } /* we may have already scan events waiting * to be processed so we need to remember * that we are already connecting and ignore them */ _is_connecting = true; return; } } void onAdvertisingEnd(const ble::AdvertisingEndEvent &event) override { ble::advertising_handle_t adv_handle = event.getAdvHandle(); if (event.getStatus() == BLE_ERROR_UNSPECIFIED) { printf("Error: Failed to stop advertising set %d\r\n", adv_handle); } else { printf("Stopped advertising set %d\r\n", adv_handle); if (event.getStatus() == BLE_ERROR_TIMEOUT) { printf("Stopped due to timeout\r\n"); } else if (event.getStatus() == BLE_ERROR_LIMIT_REACHED) { printf("Stopped due to max number of adv events reached\r\n"); } else if (event.getStatus() == BLE_ERROR_NONE) { if (event.isConnected()) { printf("Stopped early due to connection\r\n"); } else { printf("Stopped due to user request\r\n"); } } } #if BLE_FEATURE_EXTENDED_ADVERTISING if (event.getAdvHandle() == _extended_adv_handle) { /* we were waiting for it to stop before destroying it and starting scanning */ ble_error_t error = _gap.destroyAdvertisingSet(_extended_adv_handle); if (error) { print_error(error, "Error caused by Gap::destroyAdvertisingSet"); } _extended_adv_handle = ble::INVALID_ADVERTISING_HANDLE; _is_in_scanning_phase = true; _event_queue.call_in(delay, [this]{ scan(); }); } #endif //BLE_FEATURE_EXTENDED_ADVERTISING } void onAdvertisingStart(const ble::AdvertisingStartEvent &event) override { printf("Advertising set %d started\r\n", event.getAdvHandle()); } void onScanTimeout(const ble::ScanTimeoutEvent&) override { printf("Stopped scanning due to timeout parameter\r\n"); _event_queue.call(this, &GapDemo::end_scanning_mode); } /** This is called by Gap to notify the application we connected, * in our case it immediately disconnects */ void onConnectionComplete(const ble::ConnectionCompleteEvent &event) override { _is_connecting = false; _demo_duration.stop(); #if BLE_FEATURE_EXTENDED_ADVERTISING if (!_is_in_scanning_phase) { /* if we have more than one advertising sets one of them might still be active */ if (_extended_adv_handle != ble::INVALID_ADVERTISING_HANDLE) { /* if it's still active, stop it */ if (_gap.isAdvertisingActive(_extended_adv_handle)) { _gap.stopAdvertising(_extended_adv_handle); } else if (_gap.isAdvertisingActive(ble::LEGACY_ADVERTISING_HANDLE)) { _gap.stopAdvertising(ble::LEGACY_ADVERTISING_HANDLE); } } } #endif // BLE_FEATURE_EXTENDED_ADVERTISING if (event.getStatus() != BLE_ERROR_NONE) { print_error(event.getStatus(), "Connection failed"); return; } printf("Connected in %dms\r\n", read_demo_duration_in_ms()); /* cancel the connect timeout since we connected */ _event_queue.cancel(_cancel_handle); _cancel_handle = _event_queue.call_in( delay, [this, handle=event.getConnectionHandle()]{ _gap.disconnect(handle, ble::local_disconnection_reason_t::USER_TERMINATION); } ); } /** This is called by Gap to notify the application we disconnected, * in our case it calls next_demo_mode() to progress the demo */ void onDisconnectionComplete(const ble::DisconnectionCompleteEvent &event) override { printf("Disconnected\r\n"); /* if it wasn't us disconnecting then we should cancel our attempt */ if (event.getReason() == ble::disconnection_reason_t::REMOTE_USER_TERMINATED_CONNECTION) { _event_queue.cancel(_cancel_handle); } if (_is_in_scanning_phase) { _event_queue.call(this, &GapDemo::end_scanning_mode); } else { _event_queue.call(this, &GapDemo::end_advertising_mode); } } /** * Implementation of Gap::EventHandler::onReadPhy */ void onReadPhy( ble_error_t error, ble::connection_handle_t connectionHandle, ble::phy_t txPhy, ble::phy_t rxPhy ) override { */ void onPhyUpdateComplete( ble_error_t error, ble::connection_handle_t connectionHandle, ble::phy_t txPhy, ble::phy_t rxPhy ) override {) ); } } /** * Implementation of Gap::EventHandler::onDataLengthChange */ void onDataLengthChange( ble::connection_handle_t connectionHandle, uint16_t txSize, uint16_t rxSize ) override { printf( "Data length changed on the connection %d.\r\n" "Maximum sizes for over the air packets are:\r\n" "%d octets for transmit and %d octets for receive.\r\n", connectionHandle, txSize, rxSize ); } private: /** Finish the mode by shutting down advertising or scanning and move to the next mode. */ void end_scanning_mode() { print_scanning_performance(); ble_error_t error = _gap.stopScan(); if (error) { print_error(error, "Error caused by Gap::stopScan"); } _is_in_scanning_phase = false; _scan_count = 0; _event_queue.call_in(delay, this, &GapDemo::advertise); } void end_advertising_mode() { print_advertising_performance(); printf("Requesting stop advertising.\r\n"); _gap.stopAdvertising(ble::LEGACY_ADVERTISING_HANDLE); #if BLE_FEATURE_EXTENDED_ADVERTISING if (_extended_adv_handle != ble::INVALID_ADVERTISING_HANDLE) { /* if it's still active, stop it */ if (_gap.isAdvertisingActive(_extended_adv_handle)) { ble_error_t error = _gap.stopAdvertising(_extended_adv_handle); if (error) { print_error(error, "Error caused by Gap::stopAdvertising"); } } } /* we have to wait before we destroy it until it's stopped */ #else _is_in_scanning_phase = true; _event_queue.call_in(delay, [this]{ scan(); }); #endif // BLE_FEATURE_EXTENDED_ADVERTISING } /** print some information about our radio activity */ void print_scanning::scan_interval_t(ble::millisecond_t(duration_ms)).value(); uint16_t interval_ts = scan_params.get1mPhyConfiguration().getInterval().value(); uint16_t window_ts = scan_params.get1mPhyConfiguration().getWindow()::adv_interval_t(ble::millisecond_t(duration_ms)).value(); uint16_t interval_ts = advertising_params.getMaxPrimaryInterval().value(); /* this is how many times we advertised */ uint16_t events = (duration_ts / interval_ts); uint16_t extended_events = 0; #if BLE_FEATURE_EXTENDED_ADVERTISING if (_extended_adv_handle != ble::INVALID_ADVERTISING_HANDLE) { duration_ts = ble::adv_interval_t(ble::millisecond_t(duration_ms)).value(); interval_ts = extended_advertising_params.getMaxPrimaryInterval().value(); /* this is how many times we advertised */ extended_events = (duration_ts / interval_ts); } #endif // BLE_FEATURE_EXTENDED_ADVERTISING printf("We have advertised for %dms\r\n", duration_ms); /* non-scannable and non-connectable advertising * skips rx events saving on power consumption */ if (advertising_params.getType() == ble::advertising_type_t::NON_CONNECTABLE_UNDIRECTED) { printf("We created at least %d tx events\r\n", events); } else { printf("We created at least %d tx and rx events\r\n", events); } #if BLE_FEATURE_EXTENDED_ADVERTISING if (extended_events) { if (extended_advertising_params.getType() == ble::advertising_type_t::NON_CONNECTABLE_UNDIRECTED) { printf("We created at least %d tx events with extended advertising\r\n", extended_events); } else { printf("We created at least %d tx and rx events with extended advertising\r\n", extended_events); } } #endif // BLE_FEATURE_EXTENDED_ADVERTISING } private: BLE &_ble; ble::Gap &_gap; events::EventQueue &_event_queue; /* Keep track of our progress through demo modes */ bool _is_in_scanning_phase = false; bool _is_connecting = false; /* Remember the call id of the function on _event_queue * so we can cancel it if we need to end the phase early */ int _cancel_handle = 0; /* Measure performance of our advertising/scanning */ Timer _demo_duration; size_t _scan_count = 0; #if BLE_FEATURE_EXTENDED_ADVERTISING ble::advertising_handle_t _extended_adv_handle = ble::INVALID_ADVERTISING_HANDLE; #endif // BLE_FEATURE_EXTENDED_ADVERTISING }; /** Schedule processing of events from the BLE middleware in the event queue. */ void schedule_ble_events(BLE::OnEventsToProcessCallbackContext *context) { event_queue.call(Callback<void()>(&context->ble, &BLE::processEvents)); } int main() { mbed_trace_init(); BLE &ble = BLE::Instance(); /* this will inform us off all events so we can schedule their handling * using our event queue */ ble.onEventsToProcess(schedule_ble_events); GapDemo demo(ble, event_queue); demo.run(); return 0; }
|
https://os.mbed.com/docs/mbed-os/v6.15/apis/gap.html
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Python Requests Integration
The ScrapeOps Python Requests SDK is an extension for your scrapers that gives you all the scraping monitoring, statistics, alerting, and data validation you will need straight out of the box.
To start using it, you just need to initialize the
ScrapeOpsRequests logger in your scraper and use the ScrapeOps
RequestsWrapper instead of the normal Python Requests library.
The ScrapeOps
RequestsWrapper is just a wrapper around the standard Python Requests library so all functionality (HTTP requests, Sessions, HTTPAdapter, etc.) will work as normal and return the stanard requests response object.
Once integrated, the ScrapeOpsRequests logger will automatically monitor your scrapers and send your logs to your scraping dashboard.
🚀 Getting Setup
You can get the ScrapeOps monitoring suite up and running in 4 easy steps.
#1 - Install the ScrapeOps Python Requests SDK:
pip install scrapeops-python-requests
#2 - Import & Initialize the ScrapeOps logger:
Import then initialize the
ScrapeOpsRequests logger at the top of your scraper and add your API key.
## myscraper.py
from scrapeops_python_requests.scrapeops_requests import ScrapeOpsRequests
scrapeops_logger = ScrapeOpsRequests(
scrapeops_api_key='API_KEY_HERE',
spider_name='SPIDER_NAME_HERE',
job_name='JOB_NAME_HERE',
)
Here, you need to include your free ScrapeOps API Key, which you can get for free here.
You also have the option of giving your scraper a:
- Spider Name: This should be the name of your scraper, and can be reused by multiple jobs scraping different pages on a website. When not defined, it will default to the filename of your scraper.
- Job Name: This should be used if the same spider is being used for multiple different jobs so you can compare the stats of similar jobs historically. Example would be a spider scraping a eCommerce store, but have multiple jobs using the same scraper to scrape different products on the website (i.e. Books, Electronics, Fashion). When not defined, the job name will default to the spider name.
#3 - Initialize the ScrapeOps Python Requests Wrapper
The last step is to just override the standard python requests with the ScrapeOps RequestsWrapper.
Our wrapper uses the standard Python Request library but just provides a way for us to monitor the requests as they happen.
Please only initialize the requests wrapper once near the top of your code.
requests = scrapeops_logger.RequestsWrapper()
#4 - Log Scraped Items:
With the ScrapeOpsRequests logger you can also log the data you scrape as items using the
item_scraped method.
## Log Scraped Item
scrapeops_logger.item_scraped(
response=response,
item={'demo': 'test'}
)
Using
item_scraped the logger will log that an item has been scraped and calculate the data coverage so you can see in your dashboard if your scraper is missing some fields.
Example Scraper:
Here is a simple example so you can see how you can add it to an existing project.
from scrapeops_python_requests.scrapeops_requests import ScrapeOpsRequests
## Initialize the ScrapeOps Logger
scrapeops_logger = ScrapeOpsRequests(
scrapeops_api_key='API_KEY_HERE',
spider_name='QuotesSpider',
job_name='Job1',
)
## Initialize the ScrapeOps Python Requests Wrapper
requests = scrapeops_logger.RequestsWrapper()
urls = [
'',
'',
'',
'',
'',
]
for url in urls:
response = requests.get(url)
item = {'test': 'hello'}
## Log Scraped Item
scrapeops_logger.item_scraped(
response=response,
item=item
)
Done!
That's all. From here, the ScrapeOps SDK will automatically monitor and collect statistics from your scraping jobs and display them in your ScrapeOps dashboard.
|
https://scrapeops.io/docs/monitoring/python-requests/sdk-integration/
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
In C#, there are symbols which tell the compiler to perform certain operations. These symbols are known as operators. For example, (+) is an operator which is used for adding two numbers similar to addition in Maths.
C# provides different types of operators:
- Arithmetic Operators
- Relational Operators
- Increment and Decrement Operators
- Logical Operators
- Assignment Operators
C# Arithmetic Operators
Arithmetic Operators are the type of operators which take numerical values as their operands and return a single numerical value.
Let's assume the values of a and b to be 8 and 4 respectively.
using System; class Test { static void Main(string[] args) { int a=10, b=2; Console.WriteLine("a+b = " + (a+b)); Console.WriteLine("a-b = " + (a-b)); Console.WriteLine("a*b = " + (a*b)); Console.WriteLine("a/b = " + (a/b)); } }
int a = 10, b = 2 → We are declaring two integer variables a and b and assigning them the values 10 and 2 respectively.
Inside
Console.WriteLine,
"a+b = " is written inside (" "), so it got printed as it is without evaluation. Then
(a+b) after
+ is evaluated (i.e., 12) and printed.
So,
"a+b = " combined with the calculated value of
a+b i.e. (10+2 i.e. 12) and
a+b = 12 got printed.
Let's take one more example.
using System; class Test { static void Main(string[] args) { int a=10, b=2; int z = a+b; Console.WriteLine("a+b = " + z); } }
In this example, z is assigned the value of
a+b i.e., 12. Inside
WriteLine,
a+b = is inside " ", so it got printed as it is (without evaluation) and then the value of z i.e., 12 got printed. So,
a+b = 12 got printed.
When we divide two integers, the result is an integer. For example, 7/3 = 2 (not 2.33333).
To get the exact decimal value of the answer, at least one of numerator or denominator should have a decimal value.
For example,
7/3.0,
7.0/3 and
7.0/3.0 return 2.33333 because at least one of the operands in each case has a decimal value.
using System; class Test { static void Main(string[] args) { Console.WriteLine("3/2 = " + (3/2)); Console.WriteLine("3/2.0 = " + (3/2.0)); Console.WriteLine("3.0/2 = " + (3.0/2)); Console.WriteLine("3.0/2.0 = " + (3.0/2.0)); } }
As we have seen that
3/2 (both
int) is giving 1 whereas
3.0/2 or
3/2.0 or
3.0/2.0 (at least one is double) is giving us 1.5.
Suppose we are using two integers in our program and we got a need to get a double result after division, we can easily convert them to double during the time of division using explicit conversion. For example,
(double)a/b.
Let's look at an example.
using System; class Test { static void Main(string[] args) { int a = 5; int b = 2; Console.WriteLine((double)a/b); } }
We casted the variable a to double during the division (
(double)a/b) and we got a double result (2.5).
C# Relational and Equality Operators
Relational Operators check the relationship (comparison) between two operands. It returns
true if the relationship is true and
false if it is false.
Following is the list of relational operators in C#.
Again, assume the value of a to be 8 and that of b to be 4.
Let's see an example to understand the use of these operators.
using System; class Test { static void Main(string[] args) { int a = 5; int b = 4; Console.WriteLine(a == b); Console.WriteLine(a != b); Console.WriteLine(a > b); Console.WriteLine(a < b); Console.WriteLine(a >= b); Console.WriteLine(a <= b); } }
In the above example, the value of a is not equal to b, therefore
a == b (equal to) returned false and
a !=b (not equal to) returned true.
Since the value of a is greater than b, therefore
a > b (greater than) and
a >= b (greater than or equal to) returned true whereas
a < b (less than) and
a <= b (less than or equal to) returned false.
C# Difference between = and ==
Although = and == seem to be the same, but they are quite different from each other.
= is an assignment operator while
== is an equality operator.
= assign values from its right side operands to its left side operands whereas
== compares values and check if the values of two operands are equal or not.
Take two examples.
x = 5
x == 5
By writing
x = 5, we assigned a value 5 to x, whereas by writing
x == 5, we checked if the value of x is 5 or not.
C# Logical Operators
In C#, if we write
A and B, then the expression is true if both A and B are true. Whereas, if we write
A or B, then the expression is true if either A or B or both are true.
A and B → Both A and B
A or B → Either A or B or both.
The symbol for AND is
&& while that of OR is
||.
Assume the value of a and b to be true.
In Logical AND (&&) operator, if anyone of the expression is false, the condition becomes false. Therefore, for the condition to become true, both the expressions must be true.
For example,
(3>2)&&(5>4) returns true because both the expressions (3>2 as well as 5>4) are true. Conditions
(3>2)&&(5<4),
(3<2)&&(5>4) and
(3<2)&&(5<4) are false because at least one of the expressions is false in each case.
For Logical OR (||) operator, the condition is only false when both the expressions are false. If any one expression is true, the condition returns true. Therefore,
(3<2)||(5<4) returns false whereas
(3>2)||(5<4),
(3<2)||(5>4) and
(3>2)||(5>4) returns true.
Not (!) operator converts true to false and vice versa. For example,
!(4<7) is true because the expression (4<7) is false and the
! operator makes it true.
using System; class Test { static void Main(string[] args) { int a = 5, b = 0; Console.WriteLine("(a>b) && (b==a) = " + ((a>b) && (b==a))); Console.WriteLine("(a>b) || (b==a) = " + ((a>b) || (b==a))); Console.WriteLine("!(a > b) = " + !(a > b)); } }
In the expression
(a>b) && (b==a), since
b==a is false, therefore the condition became false. Since
a>b is true, therefore the expression
(a || b) became true. The expression
a>b is true (since the value of a is greater than b) and thus the expression
!(a>b) became false.
Before going further to learn about more different operators, let's look at the precedence of operators i.e., which operator should be evaluated first if there are more than one operators in an expression like
2/3+4*6.
C# Precedence of Operators
If we have written more than one operation in one line, then which operation should be done first is governed by the following rules :- Expressions inside brackets '()' are evaluated first. After that, this table is followed (The operator at the top has higher precedence and that at the bottom has the least precedence). If two operators have the same precedence, then the evaluation will be done in the direction stated in the table.
Let's consider an expression
n = 4 * 8 + 7
Since the priority order of multiplication operator ( * ) is greater than that of addition operator ( + ), so first 4 will get multiplied with 8 and after that 7 will be added to the product.
Suppose two operators have the same priority order in an expression, then the evaluation will start from left or right as shown in the above table. For example, take the following expression:
10 / 5 + 2 * 3 -8
Since the priorities of / and * are greater than those of + and -, therefore / and * will be evaluated first. But / and * have the same priority order, so these will be evaluated from left to right (as stated in the table) simplifying to the following expression.
2 + 2 * 3 - 8
After /, * will be evaluated resulting in the following expression:
2 + 6 - 8
Again + and - have the same precedence, therefore these will also be evaluated from left to right i.e., first 2 and 6 will be added after which 8 will be subtracted resulting in 0.
C# Assignment Operators
Assignment Operators are used to assign values from its right side operands to its left side operands. The most common assignment operator is
=.
a = 10 means that we are assigning a value 10 to the variable a.
10 = ais invalid because we can't change the value of 10 to a.
There are more assignment operators which are listed in the following table.
Before going further, let's have a look at an example:
using System; class Test { static void Main(string[] args) { int a = 7; a = a+1; Console.WriteLine(a); a = a-1; Console.WriteLine(a); } }
'=' operator starts from the right. eg.- if a is 4 and b is 5, then a = b will change a to 5 and b will remain 5.
a = a+b → Since '+' has a higher priority than '=', so, a+b will be calculated first. After this, '=' will assign the value of the sum
a+b to a.
In the exact same fashion, in
a = a+1,
a+1 will be calculated first since
+ has higher priority than
=. Now, the expression will become
a = 8 making the value of a equal to 8.
Similarly,
a = a-1 will make the value of a equal to 7 again.
Let's look at an example where different assignment operators are used.
using System; class Test { static void Main(string[] args) { int a = 7; Console.WriteLine("a += 4 Value of a: " + (a += 4)); Console.WriteLine("a -= 4 Value of a: " + (a -= 4)); Console.WriteLine("a *= 4 Value of a: " + (a *= 4)); Console.WriteLine("a /= 4 Value of a: " + (a /= 4)); Console.WriteLine("a %= 4 Value of a: " + (a %= 4)); } }
To understand this, consider the value of a variable n is 5. Now if we write
n += 2, the expression gets evaluated as
n = n+2 thus making the value of n 7 (n = 5 + 2).
In the above example, initially, the value of a is 7.
The expression
a = a+4 thus making the value of a as 11. After this, the expression
a -= 4 gets evaluated as
a = a-4 thus subtracting 4 from the current value of a (i.e. 11) and making it 7 again. Similarly, other expressions will get evaluated.
C# Increment and Decrement Operators
++ and -- are called increment and decrement operators respectively.
++ adds 1 to the operand whereas
-- subtracts 1 from the operand.
a++ increases the value of a variable a by 1 and
a-- decreases the value of a by 1.
Similarly,
++a increases the value of a by 1 and
--a decreases the value of a by 1.
In
a++ and
a--,
++ and
-- are used as postfix whereas in
++a and
--a,
++ and
-- are used as prefix.
For example, suppose the value of a is 5, then
a++ and
++a changes the value of a to 6. Similarly,
a-- and
--a changes the value of a to 4.
C# Difference between Prefix and Postfix
While both
a++ and
++a increases the value of 'a', the only difference between these is that
a++ returns the value of a before the value of a is incremented and
++a first increases the value of a by 1 and then returns the incremented value of a.
Similarly,
a-- first returns the value of a and then decreases its value by 1 and
--a first decreases the value of a by 1 and then returns the decreased value.
Let's look at the example given below.
using System; class Test { static void Main(string[] args) { int a=8, b=8, c=8, d=8; Console.WriteLine("a++: Value of a: " + (a++)); Console.WriteLine("++b: Value of a: " + (++b)); Console.WriteLine("c--: Value of a: " + (c--)); Console.WriteLine("--d: Value of a: " + (--d)); } }
In
a++, postfix increment operator is used with a which first printed the current value of a (8) and then incremented it to 9.
Similarly in
++b, the prefix operator first added one to the current value of b thus making it 9 and then printed the incremented value. The same will be followed for the decremented operators.
C# sizeof
sizeof() operator is used to get the size of data type. We can use the
sizeof operator to get the size of
int as 4. Let's look at the example given below.
using System; class Test { static void Main(string[] args) { Console.WriteLine("size of int : " + sizeof(int)); Console.WriteLine("size of long : " + sizeof(long)); Console.WriteLine("size of unsigned int : " + sizeof(uint)); Console.WriteLine("size of boolean : " + sizeof(bool)); Console.WriteLine("size of short : " + sizeof(short)); Console.WriteLine("size of unsigned short : " + sizeof(ushort)); Console.WriteLine("size of double : " + sizeof(double)); Console.WriteLine("size of char: " + sizeof(char)); } }
C# typeof
typeof() operator is used to get the type of a data type. We can use the
typeof operator to get the type of
int as System.Int32. Let's look at the example given below.
using System; class Test { static void Main(string[] args) { Console.WriteLine("size of int : " + typeof(int)); Console.WriteLine("size of long : " + typeof(long)); Console.WriteLine("size of unsigned int : " + typeof(uint)); Console.WriteLine("size of boolean : " + typeof(bool)); Console.WriteLine("size of short : " + typeof(short)); Console.WriteLine("size of unsigned short : " + typeof(ushort)); Console.WriteLine("size of double : " + typeof(double)); Console.WriteLine("size of char: " + typeof(char)); } }
C# Math Class
What if you want to take sine, cosine or log of a number? Yes, we can perform such mathematical operations in C# by using the Math class inside System namespace in C#. It contains many useful mathematical functions. Let's have a look at some important methods of the Math class.
Let's look at an example using some of these functions.
using System; class Test { static void Main(string[] args) { Console.WriteLine(Math.Sin(Math.PI)); Console.WriteLine(Math.Cos(Math.PI)); Console.WriteLine(Math.Abs(-1)); Console.WriteLine(Math.Floor(3.4)); Console.WriteLine(Math.Ceiling(3.4)); Console.WriteLine(Math.Pow(4, 2)); Console.WriteLine(Math.Log10(100)); Console.WriteLine(Math.Sqrt(4)); } }
With this chapter, we have covered all the basics required to enter the real programming part. From the next chapter, you will see a new part of programming.
|
https://www.codesdope.com/course/c-sharp-operators/
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Menu Close
Settings Close
Language and Page Formatting Options
About
Read more about the product including the architecture, components, and quick start guide.
Abstract
Chapter 1. Welcome to Red Hat Advanced Cluster Management for Kubernetes
Kubernetes provides a platform for deploying and managing containers in a standard, consistent control plane. However, as application workloads move from development to production, they often require multiple fit-for-purpose Kubernetes clusters to support DevOps pipelines.
Note: Use of this Red Hat product requires licensing and subscription agreement.
Users, such as administrators and site reliability engineers, face challenges as they work across a range of environments, including multiple data centers, private clouds, and public clouds that run Kubernetes clusters. Red Hat Advanced Cluster Management for Kubernetes provides the tools and capabilities to address these common challenges.
Red Hat Advanced Cluster Management for Kubernetes provides end-to-end management visibility and control to manage your Kubernetes environment. Take control of your application modernization program with management capabilities for cluster creation, application lifecycle, and provide security and compliance for all of them across data centers and hybrid cloud environments. Clusters and applications are all visible and managed from a single console, with built-in security policies. Run your operations from anywhere that Red Hat OpenShift runs, and manage any Kubernetes cluster in your fleet.
See the following image of the Welcome page from the Red Hat Advanced Cluster Management for Kubernetes console. The header displays the Applications icon to return to OpenShift Container Platform, access to the Visual Web Terminal, and more. The tiles describe the main functions of the product and link to important console pages.
With Red Hat Advanced Cluster Management for Kubernetes:
- Work across a range of environments, including multiple data centers, private clouds and public clouds that run Kubernetes clusters.
- Easily create Kubernetes clusters and offer cluster lifecycle management in a single console.
- Enforce policies at the target clusters using Kubernetes-supported custom resource definitions.
- Deploy and maintain day-two operations of business applications distributed across your cluster landscape.
This guide assumes that users are familiar with Kubernetes concepts and terminology. For more information about Kubernetes concepts, see Kubernetes Documentation.
See the following documentation for information about the product:
1.1. Multicluster architecture
Red Hat Advanced Cluster Management for Kubernetes consists of several multicluster components, which are used to access and manage your clusters. Learn more about the architecture in the following sections, then follow the links to more detailed documentation.
Learn more about the following components for Red Hat Advanced Cluster Management for Kubernetes:
1.1.1. Hub cluster
The hub cluster is the common term that is used to define the central controller that runs in a Red Hat Advanced Cluster Management for Kubernetes cluster. From the hub cluster, you can access the console and product components, as well as the Red Hat Advanced Cluster Management APIs.
From the hub cluster, you can use the console to search resources across clusters and view your topology. The Visual Web Terminal provides an interface that merges the speed of a CLI with the convenience of an interactive table with direct linking like a graphical user interface. This enables you to use the Visual Web Terminal to run many commands, like
oc and
kubectl commands, and run searches across your managed clusters. You can then explore the results from the Visual Web Terminal searches in a selectable table format.
Additionally, you can enable observability on your hub cluster to monitor metrics from your managed clusters across your cloud providers.
The hub cluster aggregates information from multiple clusters by using an asynchronous work request model and search collectors. With a graph database, the hub cluster maintains the state of clusters and applications that run on it.
1.1.2. Managed cluster
The managed cluster is the term that is used to define additional clusters that are managed by the hub cluster. The connection between the two is completed by using the Klusterlet, which is the agent that is installed on the managed cluster. The managed cluster receives and applies requests from the hub cluster and enables it to service cluster lifecycle, application lifecycle, governanace and risk, and observability on the managed cluster.
For example, managed clusters send metrics to the hub cluster if the observability service is enabled. See Observing environments to receive metrics and optimize the health of all managed clusters.
1.1.3. Cluster lifecycle
Red Hat Advanced Cluster Management cluster lifecycle defines the process of creating, importing, and managing Kubernetes clusters across various public cloud providers, private clouds, and on-premises datacenters.
From the hub cluster console, you can view an aggregation of all cluster health statuses, or view individual health metrics of many Kubernetes clusters. Additionally, you can upgrade managed OpenShift Container Platform clusters individually or in bulk, as well as destroy any OpenShift Container Platform clusters that you created using your hub cluster.
See Managing your clusters to learn about managing clusters, which is part of Cluster lifecycle.
1.1.4. Application lifecycle
Red Hat Advanced Cluster Management Application lifecycle defines the processes that are used to manage application resources on your managed clusters. A multicluster application allows you to deploy resources on multiple managed clusters, as well as maintain full control of Kubernetes resource updates for all aspects of the application with high availability.
A multicluster application uses the Kubernetes specification, but provides additional automation of the deployment and lifecycle management of resources. As a technology preview function, the integration of Ansible Tower jobs enables scheduling automated tasks.
See Managing applications for more application topics.
1.1.5. Governance and risk
Governance and risk enables you to define policies that either enforce security compliance, or alert you of changes that violate the configured compliance requirements for your environment. You can manage the policies and compliance requirements across all of your management clusters from a central interface page. After you configure a Red Hat Advanced Cluster Management hub cluster and a managed cluster, you can view and create policies with the Red Hat Advanced Cluster Management policy framework. You can take advantage of the
policy-collection community to see what policies community members created and contributed, as well as contribute your own policies for others to use.
For more information about Governance and risk, see the Security introduction. Additionally, learn about access requirements from the Role-based access control documentation.
1.1.6. Observability
The Observability component collects and reports the status and health of the OpenShift Container Platform version 4.x, or later, managed clusters to the hub cluster. You can create custom alerts to inform you of problems with your fleet of managed clusters. Because it requires configured persistent storage, observability must be enabled after the Red Hat Advanced Cluster Management installation.
For more information about Observability, see Observing environments introduction.
See the product Installing section to prepare your cluster and get configuration information.
1.2. Getting started
1.2.1. Introduction
See the product architecture at Multicluster architecture.
After you learn about the hub cluster and managed cluster architecture, learn about the Supported clouds, which lists the cloud provider cluster options.
The hub cluster is a Red Hat OpenShift Container Platform cluster version 4.5, 4.6, or 4.7 and can run on any supported Red Hat OpenShift Container Platform infrastructure.
The Glossary of terms defines common terms for the product.
If you experience problems, see the Troubleshooting guide to learn about the
must-gather command and see documented troubleshooting tasks that might help resolve issues.
1.2.2. Install
- Before you install Red Hat Advanced Cluster Management for Kubernetes, review the system configuration requirements and settings at Requirements and recommendations. Get information about required operating systems and supported browsers. For instance, you want to ensure that you have a supported Red Hat OpenShift Container Platform version so that you can set up your hub cluster.
- You also need to ensure that your hub cluster has the appropriate capacity. To prepare your hub cluster, see Preparing your hub cluster for installation.
- With a supported version of OpenShift Container Platform installed and running on your hub cluster, you can proceed with Installing while connected online.
After installation, review the Web console guide to learn how to access your console and what features are available in the console.
1.2.3. Manage clusters
You are now ready to create and import clusters. From your hub cluster, you can create clusters from other Kubernetes services to manage, and you can view cluster information.
- See Creating a cluster to learn about the types of managed clusters you can create. When you create a managed cluster, the new managed cluster imports automatically.
- If you have a cluster that you want to import manually, you can view Importing a target managed cluster to the hub cluster to learn how to import a managed cluster.
- When you no longer need to manage a cluster, you can detach that cluster from the Cluster page.
1.2.4. Manage applications
You can start managing applications on any created and imported managed clusters. The types of resources that you can create are applications, channels, subscriptions, and placement rules.
- Learn more about the resources and how to create and manage them at Managing applications. Add or edit your
.yamlfile to create your resources.
- View and edit your resources from the Applications Dashboard.
1.2.5. Manage security
You can also manage security and compliance across your created and imported managed clusters.
- Create a policy using the policy templates. See the Policy overview for details about how to create a policy with a
.yamlfile template.
- From the Policies page, you can view a summary of cluster and policy violations.
- View your policies from the Governance and risk page in the console. You can also view policy details from the cluster Overview.
1.2.6. Observe clusters
You can enable the observability service to gain insight and optimize your managed clusters. Enable the observability service operator (
multicluster-observability-operator) to monitor the health of your managed clusters.
- Learn more about Observing environments and how to Enable observability service.
1.3. Glossary of terms
Red Hat Advanced Cluster Management for Kubernetes consists of several multicluster components that are defined in the following sections. Additionally, some common Kubernetes terms are used within the product. Terms are listed alphabetically.
1.3.1. Relevant standardized glossaries
1.3.2. Red Hat Advanced Cluster Management for Kubernetes terms
1.3.2.1. Application lifecycle
The processes that are used to manage application resources on your managed clusters. A multicluster application uses a Kubernetes specification, but with additional automation of the deployment and lifecycle management of resources to individual clusters.
1.3.2.2. Channel
A custom resource definition that points to repositories where Kubernetes resources are stored, such as Git repositories, Helm chart repositories, ObjectStore repositories, or namespaces templates on the hub cluster. Channels support multiple subscriptions from multiple targets.
1.3.2.3. Cluster lifecycle
Defines the process of creating, importing, and managing clusters across public and private clouds.
1.3.2.4. Console
The graphical user interface for Red Hat Advanced Cluster Management for Kubernetes.
1.3.2.5. Deployable
A resource that retrieves the output of a build, packages the output with configuration properties, and installs the package in a pre-defined location so that it can be tested or run.
1.3.2.6. Governance and risk
The Red Hat Advanced Cluster Management for Kubernetes processes used to manage security and compliance.
1.3.2.7. Hub cluster
The central controller that runs in a Red Hat Advanced Cluster Management for Kubernetes cluster. From the hub cluster, you can access the console and components found on that console, as well as APIs.
1.3.2.8. Managed cluster
Created and imported clusters are managed by the klusterlet agent and its add-ons, which initiates a connection to the Red Hat Advanced Cluster Management for Kubernetes hub cluster.
1.3.2.9. Klusterlet
The agent that contains two controllers on the managed cluster that initiates a connection to the Red Hat Advanced Cluster Management for Kubernetes hub cluster.
1.3.2.10. Klusterlet add-on
Specialized controller on the Klusterlet that provides additional management capability.
1.3.2.11. Placement policy
A policy that defines where the application components should be deployed and how many replicas there should be.
1.3.2.12. Placement rule
A rule that defines the target clusters where subscriptions are delivered. For instance, verify the cluster name, resource annotations, or resource label(s).
1.3.2.13. Subscriptions
A resource that identifies the Kubernetes resources within channels (resource repositories), then places the Kubernetes resource on the target clusters.
|
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html-single/about/index
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Explore Python Gensim Library For NLP
In this tutorial, we will focus on the Gensim Python library for text analysis.
Gensim is an acronym for Generate Similar. It is a free Python library for natural language processing written by Radim Rehurek which is used in word embeddings, topic modeling, and text similarity. It is developed for generating word and document vectors. It also extracts the topics from textual documents. It is an open-source, scalable, robust, fast, efficient multicore Implementation, and platform-independent.
In this tutorial, we are going to cover the following topics:
Installing Gensim
Gensim is one of the powerful libraries for natural language processing. It will support Bag of Words, TFIDF, Word2Vec, Doc2Vec, and Topic modeling. Let install the library using
pip command:
pip install gensim
Create Gensim Dictionary
In this section, we will start gensim by creating a dictionary object. First, we load the text data file. you can download it from the following link.
# open the text file as an object file = open('hamlet.txt', encoding ='utf-8') # read the file text=file.read()
Now, we tokenize and preprocess the data using the string function split() and simple_preprocess() function available in gensim module.
# Tokenize data: Handling punctuations and lowercasing the text from gensim.utils import simple_preprocess # preprocess the file to get a list of tokens token_list =[] for sentence in text.split('.'): # the simple_preprocess function returns a list of each sentence token_list.append(simple_preprocess(sentence, deacc = True)) print (token_list[:2])
Output:
[['the', 'tragedy', 'of', 'hamlet', 'prince', 'of', 'denmark', 'by', 'william', 'shakespeare', 'dramatis', 'personae', 'claudius', 'king', 'of', 'denmark'], ['marcellus', 'officer']]
In the above code block, we have tokenized and preprocessed the hamlet text data.
- deacc (bool, optional) – Remove accent marks from tokens using deaccent() function.
- The deaccent() function is another utility function, documented at the link, which does exactly what the name and documentation suggest: removes accent marks from letters, so that, for example, ‘é’ becomes just ‘e’.
- simple_preprocess(), as per its documentation, to discard any tokens shorter than min_len=2 characters.
After tokenization and preprocessing, we will create gensim dictionary object for the above-tokenized text.
# Import gensim corpora from gensim import corpora # storing the extracted tokens into the dictionary my_dictionary = corpora.Dictionary(token_list) # print the dictionary print(my_dictionary)
Output:
Dictionary(4593 unique tokens: ['by', 'claudius', 'denmark', 'dramatis', 'hamlet']...)
Here, gensim dictionary stores all the unique tokens.
Now, we will see how to save and load the dictionary object.
# save your dictionary to disk my_dictionary.save('dictionary.dict') # load back load_dict = corpora.Dictionary.load('dictionary.dict') print(load_dict)
Output:
Dictionary(4593 unique tokens: ['by', 'claudius', 'denmark', 'dramatis', 'hamlet']...)
Bag of Words
The Bag-of-words model(BoW ) is the simplest way of extracting features from the text. BoW converts text into the matrix of the occurrence of words within a document. This model concerns whether given words occurred or not in the document.
Let’s create a bag of words using function doc2bow() for each tokenized sentence. Finally, we will have a list of tokens with their frequency.
# Converting to a bag of word corpus BoW_corpus =[my_dictionary.doc2bow(sent, allow_update = True) for sent in token_list] print(BoW_corpus[:2])
[[(0, 1), (1, 1), (2, 2), (3, 1), (4, 1), (5, 1), (6, 3), (7, 1), (8, 1), (9, 1), (10, 1), (11, 1), (12, 1)], [(13, 1), (14, 1)]]
In the above code, we have generated the bag or words. In the output, you can see the index and frequency of each token. If you want to replace the index with a token then you can try the following script:
# Word weight in Bag of Words corpus word_weight =[] for doc in BoW_corpus: for id, freq in doc: word_weight.append([my_dictionary[id], freq]) print(word_weight[:10])
[['by', 1], ['claudius', 1], ['denmark', 2], ['dramatis', 1], ['hamlet', 1], ['king', 1], ['of', 3], ['personae', 1], ['prince', 1], ['shakespeare', 1]]
Here, you can see the list of tokens with their frequency.
TF-IDF
- In Term Frequency(TF), you just count the number of words that occurred in each document. The main issue with this Term Frequency is that it will give more weight to longer documents. Term frequency is basically the output of the BoW model.
- IDF(Inverse Document Frequency) measures the amount of information a given word provides across the document. IDF is the logarithmically scaled inverse ratio of the number of documents that contain the word and the total number of documents.
- TF-IDF(Term Frequency-Inverse Document Frequency) normalizes the document term matrix. It is the product of TF and IDF. Word with high tf-idf in a document, it is most of the time that occurred in given documents and must be absent in the other documents.
Let’s generate the TF-IDF features for the given BoW corpus.
from gensim.models import TfidfModel import numpy as np # create TF-IDF model tfIdf = TfidfModel(BoW_corpus, smartirs ='ntc') # TF-IDF Word Weight weight_tfidf =[] for doc in tfIdf[BoW_corpus]: for id, tf_idf in doc: weight_tfidf.append([my_dictionary[id], np.around(tf_idf, decimals = 3)]) print(weight_tfidf[:10])
Output: [['by', 0.146], ['claudius', 0.31], ['denmark', 0.407], ['dramatis', 0.339], ['hamlet', 0.142], ['king', 0.117], ['of', 0.241], ['personae', 0.339], ['prince', 0.272], ['shakespeare', 0.339]]
Word2Vec.
There are two main methods for woed2vec: Common Bag Of Words (CBOW) and Skip Gram.
Continuous Bag of Words (CBOW) predicts the current word based on four future and four history words. Skip-gram takes the current word as input and predicts the before and after the current word. In both types of methods, the neural network language model (NNLM) is used to train the model. Skip Gram works well with a small amount of data and is found to represent rare words well. On the other hand, CBOW is faster and has better representations for more frequent words.
Let’s implement gensim Word2Vec in python:
# import Word2Vec model from gensim.models import Word2Vec # Create Word2vec object model = Word2Vec(sentences=token_list, # tokenized sentences size=100, window=5, min_count=1, workers=4, sg=0) # CBOW #Save model model.save("word2vec.model") # Load trained Word2Vec model model = Word2Vec.load("word2vec.model") # Generate vector vector = model.wv['think'] # returns numpy array print(vector)
In the above code, we have built the Word2Vec model using Gensim. Here is the description for all the parameters:
- min_count is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage.
- workers , the last of the major parameters (full list here) is for training parallelization.
- window: Maximum distance between the current and predicted word within a sentence.
- sg: stands for skip-gram; 0- CBOW and 1:skipgram
Output: [-0.27096474 -0.02201273 0.04375215 0.16169178 0.385864 -0.00830234 0.06216158 -0.14317605 0.17866768 0.13853565 -0.05782828 -0.24181016 -0.21526945 -0.34448552 -0.03946546 0.25111085 0.03826794 -0.31459117 0.05657561 -0.10587984 0.0904238 -0.1054946 -0.30354315 -0.12670684 -0.07937846 -0.09390186 0.01288407 -0.14465155 0.00734721 0.21977565 0.09089493 0.27880424 -0.12895903 0.03735492 -0.36632115 0.07415111 0.10245194 -0.25479802 0.04779665 -0.06959599 0.05201627 -0.08305986 -0.00901385 0.01109841 0.03884205 0.2771041 -0.17801927 -0.17918047 0.1551789 -0.04730623 -0.15239601 0.09148847 -0.16169599 0.07088429 -0.07817879 0.19048482 0.2557149 -0.2415944 0.17011274 0.11839501 0.1798175 0.05671703 0.03197689 0.27572715 -0.02063731 -0.04384637 -0.08028547 0.08083986 -0.3160063 -0.01283481 0.24992462 -0.04269576 -0.03815364 0.08519065 0.02496272 -0.07471556 0.17814435 0.1060199 -0.00525795 -0.08447327 0.09727245 0.01954588 0.055328 0.04693184 -0.04976451 -0.15165417 -0.19015886 0.16772328 0.02999189 -0.05189768 -0.0589773 0.07928728 -0.29813886 0.05149718 -0.14381753 -0.15011951 0.1745079 -0.14101334 -0.20089763 -0.13244842]
Pretrained Word2Vec: Google’s Word2Vec, Standford’s Glove and Fasttext
- Google’s Word2Vec treats each word in the corpus like an atomic entity and generates a vector for each word. In this sense Word2vec is very much like Glove – both treat words as the smallest unit to train on.
- Google’s Word2Vec is a “predictive” model that predicts context given word, GLOVE learns by factorizing a co-occurrence matrix.
- Fasttext treats each word as composed of character ngrams. So the vector for a word is made of the sum of this character n-grams.
- Google’s Word2vec and GLOVE do not solve the problem of out-of-vocabulary words (or generalization to unknown words) but Fasttext can solve this problem because it’s based on each character n grams.
- The biggest benefit of using FastText is that it generate better word embeddings for rare words, or even words not seen during training because the n-gram character vectors are shared with other words.
Google’s Word2Vec
In this section, we will see how Google’s pre-trained Word2Vec model can be used in Python. We are using here gensim package for an interface to word2vec. This model is trained on the vocabulary of 3 million words and phrases from 100 billion words of the Google News dataset. The vector length for each word is 300. You can download Google’s pre-trained model here.
Let’s load Google’s pre-trained model and print the shape of the vector:
from gensim.models.word2vec import Word2Vec from gensim.models import KeyedVectors model = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True) print(model.wv['reforms'].shape)
Output: (300,)
Standford Glove
GloVe stands for Global Vectors for Word Representation. It is an unsupervised learning algorithm for generating vector representations for words. You can read more about Glove in this research paper. It is a new global log-bilinear regression model for the unsupervised learning of word representations. You can use the following list of models trained on the Twitter dataset:
- glove-twitter-25 (104 MB)
- glove-twitter-50 (199 MB)
- glove-twitter-100 (387 MB)
- glove-twitter-200 (758 MB)
import gensim.downloader as api # Download the model and return as object ready for use model_glove_twitter = api.load("glove-twitter-25") # Print shape of the vector print(model_glove_twitter['reforms'].shape) # Print vector for word 'reform' print(model_glove_twitter['reforms'])
Output: (25,) [ 0.37207 0.91542 -1.6257 -0.15803 0.38455 -1.3252 -0.74057 -2.095 1.0401 -0.0027519 0.33633 -0.085222 -2.1703 0.91529 0.77599 -0.87018 -0.97346 0.68114 0.71777 -0.99392 0.028837 0.24823 -0.50573 -0.44954 -0.52987 ]
# get similar items model_glove_twitter.most_similar("policies",topn=10)
Output: [('policy', 0.9484813213348389), ('reforms', 0.9403933882713318), ('laws', 0.94012051820755), ('government', 0.9230710864067078), ('regulations', 0.9168934226036072), ('economy', 0.9110006093978882), ('immigration', 0.9105909466743469), ('legislation', 0.9089651107788086), ('govt', 0.9054746627807617), ('regulation', 0.9050778746604919)]
Facebook FastText
FastText is an improvement in the Word2Vec model that is proposed by Facebook in 2016. FastText spits the words into n-gram characters instead of using the individual word. It uses the Neural Network to train the model. The core advantage of this technique is that can easily represent the rare words because some of their n-grams may also appear in other trained words. Let’s see how to use FastText with Gensim in the following section.
# Import FastText from gensim.models import FastText # Create FastText Model object model = FastText(size=50, window=3, min_count=1) # instantiate # Build Vocab model.build_vocab(token_list) # Train FastText model model.train(token_list, total_examples=len(token_list), epochs=10) # train model.wv['policy']
Output: array([-0.328225 , 0.2092654 , 0.09407859, -0.08436475, -0.18087168, -0.19953477, -0.3864786 , 0.08250062, 0.08613443, -0.14634965, 0.18207662, 0.20164935, 0.32687476, 0.05913997, -0.04142053, 0.01215196, 0.07229924, -0.3253025 , -0.15895212, 0.07037129, -0.02852136, 0.01954574, -0.04170248, -0.08522341, 0.06419735, -0.16668107, 0.11975338, -0.00493952, 0.0261423 , -0.07769344, -0.20510232, -0.05951802, -0.3080587 , -0.13712431, 0.18453395, 0.06305533, -0.14400929, -0.07675331, 0.03025392, 0.34340212, -0.10817952, 0.25738955, 0.00591787, -0.04097764, 0.11635819, -0.634932 , -0.367688 , -0.19727138, -0.1194628 , 0.00743668], dtype=float32)
# Finding most similar words model.wv.most_similar('present')
Output: [('presentment', 0.999993622303009), ('presently', 0.9999920725822449), ('moment', 0.9999914169311523), ('presence', 0.9999902248382568), ('sent', 0.999988317489624), ('whose', 0.9999880194664001), ('bent', 0.9999875426292419), ('element', 0.9999874234199524), ('precedent', 0.9999873042106628), ('gent', 0.9999872446060181)]
Doc2Vec
Doc2vec is used to represent documents in the form of a vector. It is based on the generalized approach of the Word2Vec method. In order to deep dive into doc2vec, First, you should understand how to generate word to vectors (word2vec). Doc2Vec is used to predict the next word from numerous sample contexts of the original paragraph. It addresses the semantics of the text.
- In Distributed Memory Model of Paragraph Vectors (PV-DM), paragraph tokens remember the missing context in the paragraph. It is like a continuous bag-of-words (CBOW) version of Word2Vec.
- Distributed Bag of Words version of Paragraph Vector (PV-DBOW) ignores the context of input words and predicts missing words from a random sample. It is like the Skip-Gram version of Word2vec
documents=text.split(".") documents[:5]
from collections import namedtuple # Transform data (you can add more data preprocessing steps) docs = [] analyzedDocument = namedtuple('AnalyzedDocument', 'words tags') for i, text in enumerate(documents): words = text.lower().split() tags = [i] docs.append(analyzedDocument(words, tags)) print(docs[:2])
from gensim.models import doc2vec model = doc2vec.Doc2Vec(docs, vector_size=100, window=5, min_count=1, workers=4, dm=0) # PV-DBOW vector=model.infer_vector(['the', 'tragedy', 'of', 'hamlet,', 'prince', 'of', 'denmark', 'by', 'william', 'shakespeare', 'dramatis', 'personae', 'claudius,', 'king', 'of', 'denmark']) print(vector)
Output: [-1.5818793e-02 1.3085594e-02 -1.1896869e-02 -3.0695410e-03 1.5006907e-03 -1.3316960e-02 -5.6281965e-03 3.1253812e-03 -4.0207659e-03 -9.0181744e-03 1.2115648e-02 -1.2316694e-02 9.3884282e-03 -1.2136344e-02 9.3199247e-03 6.0257949e-03 -1.1087678e-02 -1.6263386e-02 3.0145817e-03 9.2168162e-03 -3.1892660e-03 2.5632046e-03 4.1057081e-03 -1.1103139e-02 -4.4368235e-03 9.3003511e-03 -1.9984354e-05 4.6007405e-03 4.5250896e-03 1.4299035e-02 6.4971978e-03 1.3330076e-02 1.6638277e-02 -8.3673699e-03 1.4617097e-03 -8.7684026e-04 -5.3776056e-04 1.2898060e-02 5.5408065e-04 6.9614425e-03 2.9868495e-03 -1.3385005e-03 -3.4805303e-03 1.0777158e-02 -1.1053825e-02 -8.0987150e-03 3.1651056e-03 -3.6159047e-04 -3.0776947e-03 4.9342304e-03 -1.1290920e-03 -4.8262491e-03 -9.2841331e-03 -1.4540913e-03 -1.0785381e-02 -1.7799810e-02 3.4300602e-04 2.4301475e-03 6.0869306e-03 -4.3078070e-03 2.9106432e-04 1.3333942e-03 -7.1321065e-03 4.3218113e-03 7.5919051e-03 1.7675487e-03 1.9759729e-03 -1.6749580e-03 2.5316922e-03 -7.4808724e-04 -7.0081712e-03 -7.2277770e-03 2.1022926e-03 -7.2621077e-04 1.6523260e-03 7.7043297e-03 4.9248277e-03 9.8303892e-03 4.2252508e-03 3.9137071e-03 -6.4144642e-03 -1.5699258e-03 1.5538614e-02 -1.8792158e-03 -2.2203794e-03 6.2514015e-04 9.6203719e-04 -1.5944529e-02 -1.8801112e-03 -2.8503922e-04 -4.4923062e-03 8.4128296e-03 -2.0803667e-03 1.6383808e-02 -1.6173380e-04 3.9917473e-03 1.2395959e-02 9.2958640e-03 -1.7370760e-03 -4.5007761e-04]
In the above code, we have built the Doc2Vec model using Gensim. Here is the description for all the parameters:
- dm ({1,0}, optional) – Defines the training algorithm. If dm=1, ‘distributed memory’ (PV-DM) is used. Otherwise, distributed bag of words (PV-DBOW) is employed.
- vector_size (int, optional) – Dimensionality of the feature vectors.
- window (int, optional) – The maximum distance between the current and predicted word within a sentence.
- alpha (float, optional) – The initial learning rat
Summary
Congratulations, you have made it to the end of this tutorial!
In this article, we have learned Gensim Dictionary, Bag of Words, TFIDF, Word2Vec, and Doc2Vec. We have also focused on Google’s Word2vec, Standford’s Glove, and Facebook’s FastText. We have performed all the experiments using gensim library. Of course, this is just the beginning, and there’s a lot more that we can do using Gensim in natural language processing. You can check out this article on Topic modeling.
|
https://machinelearninggeek.com/explore-python-gensim-library-for-nlp/
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Windows 10 Version 1903, May 2019 Update adds support for non-packaged desktop apps to make use of user-defined (3rd party) Windows Runtime (WinRT) Components. Previously, Windows only supported using 3rd party Windows Runtime components in a packaged app (UWP or Desktop Bridge). Trying to call a user-defined Windows Runtime Component in a non-packaged app would fail because of the absence of package identity in the app, no way to register the component with the System and in turn no way for the OS to find the component at runtime.
The restrictions blocking this application scenario have now been lifted with the introduction of Registration-free WinRT (Reg-free WinRT). Similar to the classic Registration-free COM feature, Reg-Free WinRT activates a component without using a registry mechanism to store and retrieve information about the component. Instead of registering the component during deployment which is the case in packaged apps, you can now declare information about your component’s assemblies and classes in the classic Win32-style application.manifest. At runtime, the information stored in the manifest will direct the activation of the component.
Why use Windows Runtime Components in Desktop Apps
Using Windows Runtime components in your Win32 application gives you access to more of the modern Windows 10 features available through Windows Runtime APIs. This way you can integrate modern experiences in your app that light up for Windows 10 users. A great example is the ability to host UWP controls in your current WPF, Windows Forms and native Win32 desktop applications through UWP XAML Islands.
How Registration-free WinRT Works
The keys to enabling this functionality in non-packaged apps are a newly introduced Windows Runtime activation mechanism and the new ”activatableClass” element in the application manifest. It is a child element of the existing manifest “file” element, and it enables the developer to specify activatable Windows Runtime classes in a dll the application will be making use of. At runtime this directs activation of the component’s classes. Without this information non-packaged apps would have no way to find the component. Below is an example declaration of a dll (WinRTComponent.dll) and the activatable classes (WinRTComponent.Class*) our application is making use of. The “threadingModel” and namespace (“xmlns”) must be specified as shown:
<?xml version="1.0" encoding="utf-8"?> <assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1"> <assemblyIdentity version="1.0.0.0" name="MyApplication.app"/> <file name="WinRTComponent.dll"> <activatableClass name="WinRTComponent.Class1" threadingModel="both" xmlns="urn:schemas-microsoft-com:winrt.v1" /> <activatableClass name="WinRTComponent.Class2" threadingModel="both" xmlns="urn:schemas-microsoft-com:winrt.v1" /> </file> </assembly>
The Windows Runtime Component
For our examples we’ll be using a simple C++ Windows Runtime component with a single class (WinRTComponent.Class) that has a string property. In practice you can make use of more sophisticated components containing UWP controls. Some good examples are this UWP XAML Islands sample and these Win2D samples.
Figure 1: C++ Windows Runtime Component
Using A C# Host App
GitHub Sample:
In our first example we’ll look at a non-packaged Windows Forms app (WinFormsApp) which is referencing our C++ Windows Runtime Component (WinRTComponent). Below is an implementation of a button in the app calling the component class and displaying its string in a textbox and popup:
Figure 2: WinForms App Consuming component
All we need to get the code to compile is to add a reference to the WinRTComponent project from our WinForms app – right click the project node | Add | Reference | Projects | WinRTComponent. Adding the reference also ensures every time we build our app, the component is also built to keep track of any new changes in the component.
Although the code compiles, if we try to run the solution, the app will fail. This is because the system has no way of knowing which DLL contains WinRTComponent.Class and where to find the DLL. This is where the application manifest and Registration-free WinRT come in. On the application node right click | Add | New Item | Visual C# | Application Manifest File. The manifest file naming convention is that it must have the same name as our application’s .exe and have the .manifest extension, in this case I named it “WinFormsApp.exe.manifest”. We don’t need most of the text in the template manifest so we can replace it with the DLL and class declarations as shown below:
Figure 3: Application Manifest in WinForms App
Now that we’ve given the system a way of knowing where to find WinRTComponent.Class, we need to make sure the component DLL and all its dependencies are in the same directory as our app’s .exe. To get the component DLL in the correct directory we will use a Post Build Event – right click app project | Properties | Build Events | Post Build Event, and specify a command to copy the component dll from its output directory to the same output directory as the .exe:
copy /Y “$(SolutionDir)WinRTComponent\bin\$(Platform)\$(Configuration)\WinRTComponent.dll” “$(SolutionDir)$(MSBuildProjectName)\$(OutDir)WinRTComponent.dll”
Handling Dependencies
Because our component is built in visual C++, it has a runtime dependency on the C++ Runtime. Windows Runtime components were originally created to only work in packaged applications distributed through the Microsoft Store, as a result, they have a dependency on the ‘Store version’ of the C++ Runtime DLLs, aka the VCLibs framework package. Unfortunately, redistributing the VCLibs framework package outside the Microsoft Store is currently not supported. As a result, we’ve had to come up with an alternate solution to satisfy the framework package dependency in non-packaged applications. We created app-local forwarding DLLs in the ‘form’ of the Store framework package DLLs that forward their function calls to the standard VC++ Runtime Libraries, aka the VCRedist. You can download the forwarding DLLs as the NuGet package Microsoft.VCRTForwarders.140 to resolve the Store framework package dependency.
The combination of the app-Local forwarding DLLs obtained via the NuGet package and the VCRedist allows your non-Store deployed Windows Runtime component to work as if it was deployed through the Store. Since native C++ applications already have a dependency on the VCRedist, the Microsoft.VCRTForwarders.140 NuGet package is a new dependency. For managed applications the NuGet package and the VCRedist are both new dependencies.
The Microsoft.VCRTForwarders.140 NuGet package can be found here:
The VCRedist can be found here:
After adding the Microsoft.VCRTForwarders.140 NuGet package in our app everything should be set, and running our application displays text from our Windows Runtime component:
Figure 4: Running WinForms App
Using A C++ Host App
GitHub Sample:
To successfully reference a C++ Windows Runtime component from a C++ app, you need to use C++/WinRT to generate projection header files of your component. You can then include these header files in your app code to call your component. You can find out more about the C++/WinRT authoring experience here. Making use of a C++ Windows Runtime component in a non-packaged C++ app is very similar to the process we outlined above when using a C# app. However, the main differences are:
- Visual Studio doesn’t allow you to reference the C++ Windows Runtime component from a non-packaged C++ host app.
- You need C++/WinRT generated projection headers of the component in your app code.
Visual Studio doesn’t allow you to reference the Windows Runtime component from a non-packaged C++ app due to the different platforms the projects target. A nifty solution around this is to reference the Component’s WinMD using a property sheet. We need this reference so that C++/WinRT can generate projection header files of the component which we can use in our app code. So the first thing we’ll do to our C++ app is add a property sheet – right-click the project node| Add | New Item | Visual C++ | Property Sheets | Property Sheet (.props)
- Edit the resulting property sheet file (sample property sheet is shown below)
- Select View | Other Windows | Property Manager
- Right-click the project node
- Select Add Existing Property Sheet
- Select the newly created property sheet file
Figure 5: Property Sheet in C++ Host App
This property sheet is doing two things: adding a reference to the component WinMD and copying the component dll to the output directory with our app’s .exe. The copying step is so that we don’t have to create a post build event as we did in the C# app (the component dll needs to be in the same directory as the app’s .exe). If you prefer using the post build event instead, you can skip the copy action specified in the property sheet.
The next step would be to make sure your app has the C++/WinRT NuGet package installed. We need this for the component projection headers. Because Visual Studio doesn’t allow us to directly add a reference to the component, we need to manually build the component whenever we update it, so that we are referencing the latest component bits in our app. When we’ve made sure the component bits are up to date, we can go ahead and then build our app. The C++/WinRT NuGet package will generate a projection header file of the component based on the WinMD reference we added in the app property sheet. If you want to see the header file click on the “All Files” icon in Visual Studio Solution Explorer | Generated Files | winrt | <ComponentName.h>:
Figure 6: C++/WinRT Generated Projections
By including the generated component projection header file (WinRTComponent.h) in our app code we can reference our component code:
Figure 7: C++ App referencing code in WinRTComponent
We then add an application manifest to our app and specify the component DLL and component classes we’re making use of:
Figure 8: Win32 Application Manifest in C++ App
And this is what we get when we build and run the app:
Figure 9: Running C++ Host App
Conclusion
Registration-free WinRT enables you to access more features in the UWP ecosystem by allowing you to use Windows Runtime Components without the requirement to package your application. This makes it easier for you to keep your existing Win32 code investments and enhance your applications by additively taking advantage of modern Windows 10 features. This means you can now take advantage of offerings such as UWP XAML Islands from your non-packaged desktop app. For a detailed look at using UWP XAML Islands in your non-packaged desktop app have a look at these samples: UWP XAML Islands and Win2D. Making use of C++ Windows Runtime components in non-packaged apps comes with the challenge of handling dependencies. While the solutions currently available are not ideal, we aim to make the process easier and more streamlined based on your feedback.
|
https://blogs.windows.com/windowsdeveloper/2019/04/30/enhancing-non-packaged-desktop-apps-using-windows-runtime-components/
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Basics and working with Flows
Dependency
To use Akka Streams, add the module to your project:
- sbt
val AkkaVersion = "2.6.20" libraryDependencies += "com.typesafe.akka" %% "akka-stream" %-stream_${scala.binary.version}</artifactId> </dependency> </dependencies>
- Gradle
def versions = [ ScalaBinary: "2.13" ] dependencies { implementation platform("com.typesafe.akka:akka-bom_${versions.ScalaBinary}:2.6.20") implementation "com.typesafe.akka:akka-stream_${versions.ScalaBinary}" }
Introduction of these entities executes from.
- Non-Blocking
- Means that a certain operation does not hinder the progress of the calling thread, even if it takes a long time to finish the requested operation.
- Graph
- A description of a stream processing topology, defining the pathways through which elements shall flow when the stream is running.
- Operator
- The common name for all building blocks that build up a Graph. Examples of operators are
map(),
filter(), custom ones extending
GraphStages and graph junctions like
Mergeor
Broadcast. For the full list of built-in operators see the operator index
When we talk about asynchronous, non-blocking backpressure, we mean that the operators available in Akka Streams will not use blocking calls but asynchronous message passing to exchange messages between each other. This way they
- An operator with exactly one output, emitting data elements whenever downstream operators are ready to receive them.
- Sink
- An operator with exactly one input, requesting and accepting data elements, possibly slowing down the upstream producer of elements.
- Flow
- An operator which has exactly one input and output, which connects its upstream and downstream by transforming the data elements flowing through it.
- RunnableGraph
- A Flow that has both ends “attached” to a Source and Sink respectively, and is ready to be
run().
It is possible to attach a
Flow
Flow to a
Source
Source resulting in a composite source, and it is also possible to prepend a
Flow to a
Sink
Sink to get a new sink. After a stream is properly constructed by having both a source and a sink, it will be represented by the
RunnableGraph
RunnableGraph type, indicating that it is ready to be executed.
It is important to remember that even after constructing the
RunnableGraph by connecting all the source, sink and different operators, no data will flow through it until it is materialized. Materialization is the process of allocating all resources needed to run the computation described by a Graph (in Akka Streams this will often involve starting up Actors). Thanks to Flows being.
- Scala
source sink val sum: Future[Int] = runnable.run()
- Java
source
final Source<Integer, NotUsed> source = Source.from(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)); // note that the Future is scala.concurrent.Future final Sink<Integer, CompletionStage<Integer>> sink = Sink.fold(0, Integer::sum); // connect the Source to the Sink, obtaining a RunnableFlow final RunnableGraph<CompletionStage<Integer>> runnable = source.toMat(sink, Keep.right()); // materialize the flow final CompletionStage<Integer> sum = runnable.run(system);
After running (materializing) the
RunnableGraph[T] we get back the materialized value of type T. Every stream operator
Sink.fold).
After running (materializing) the
RunnableGraph we get a special container object, the
MaterializedMap. Both sources and sinks are able to put specific objects into this map. Whether they put something in or not is implementation dependent.
For example, a
Sink.fold will make a
CompletionStage available in this map).
- Scala
source
val source = Source(1 to 10) val sink = Sink.fold[Int, Int](0)(_ + _) // materialize the flow, getting the Sink's materialized value val sum: Future[Int] = source.runWith(sink)
- Java
source
final Source<Integer, NotUsed> source = Source.from(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)); final Sink<Integer, CompletionStage<Integer>> sink = Sink.fold(0, Integer::sum); // materialize the flow, getting the Sink's materialized value final CompletionStage<Integer> sum = source.runWith(sink, system);
It is worth pointing out that since operators are immutable, connecting them returns a new operator, instead of modifying the existing instance, so while constructing long flows, remember to assign the new value to a variable or run it:
- Scala
source
source
final Source<Integer, NotUsed> source = Source.from(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)); source.map(x -> 0); // has no effect on source, since it's immutable source.runWith(Sink.fold(0, Integer::sum), system); // 55 // returns new Source<Integer>, with `map()` appended final Source<Integer, NotUsed> zeroes = source.map(x -> 0); final Sink<Integer, CompletionStage<Integer>> fold = Sink.fold(0, Integer::sum); zeroes.runWith(fold, system); // 0
By default, Akka Streams elements support exactly one downstream operator. Making fan-out (supporting multiple downstream operators)
MaterializedMap returned is different for each such materialization, usually leading to different values being returned each time. In the example below, we create two running materialized instances of the stream that we described in the
runnable variable. Both materializations give us a different
Future
CompletionStage from the map even though we used the same
sink to refer to the future:
- Scala
source
// connect the Source to the Sink, obtaining a RunnableGraph val sink = Sink.fold[Int, Int](0)(_ + _) val runnable: RunnableGraph[Future[Int]] = Source(1 to 10).toMat(sink)(Keep.right) // get the materialized value of the sink val sum1: Future[Int] = runnable.run() val sum2: Future[Int] = runnable.run() // sum1 and sum2 are different Futures!
- Java
source
// connect the Source to the Sink, obtaining a RunnableGraph final Sink<Integer, CompletionStage<Integer>> sink = Sink.fold(0, Integer::sum); final RunnableGraph<CompletionStage<Integer>> runnable = Source.from(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)).toMat(sink, Keep.right()); // get the materialized value of the FoldSink final CompletionStage<Integer> sum1 = runnable.run(system); final CompletionStage<Integer> sum2 = runnable.run(system); // sum1 and sum2 are different Futures!
Defining sources, sinks and flows
The objects
Source
Source and
Sink
Sink define various ways to create sources and sinks of elements. The following examples show some of the most useful constructs (refer to the API documentation for more details):
- Scala
source
// Create a source from an Iterable Source(List(1, 2, 3)) // Create a source from a Future Source.future(_))
- Java
source
// Create a source from an Iterable List<Integer> list = new LinkedList<>(); list.add(1); list.add(2); list.add(3); Source.from(list); // Create a source form a Future Source.future(Futures.successful("Hello Streams!")); // Create a source from a single element Source.single("only one element"); // an empty source Source.empty(); // Sink that folds over the stream and returns a Future // of the final result in the MaterializedMap Sink.fold(0, Integer::sum); //
source
//, NotUsed] = Flow[Int].map(_ * 2).to(Sink.foreach(println(_))) Source(1 to 6).to(sink) // Broadcast to a sink inline val otherSink: Sink[Int, NotUsed] = Flow[Int].alsoTo(Sink.foreach(println(_))).to(Sink.ignore) Source(1 to 6).to(otherSink)
- Java
source
//. operators. It is possible however to add explicit buffer operators with overflow strategies that can influence the behavior
Source,
Flow
Flow (referred to as
Processor in Reactive Streams) and
Sink)
Request(int (
RunnableGraph
RunnableGraph)
Source
Source and
Flow
Flow elements as well as a small number of special syntactic sugars for running with well-known sinks, such as
runForeach(el => ...)
runForeach(el -> ...) (being an alias to
runWith(Sink.foreach(el => ...))
runWith(Sink.foreach(el -> ...))).
Materialization is performed synchronously on the materializing thread by an
ActorSystem
ActorSystem global
Materializer
Materializer. The actual stream processing is handled by actors started up during the streams materialization, which will be running on the thread pools they have been configured to run on - which defaults to the dispatcher set in the
ActorSystem config or provided as attributes on the stream that is getting materialized.
Reusing instances of linear computation operators (Source, Sink, Flow) inside composite Graphs is legal, yet will materialize that operator multiple times.
Operator Fusion
By default, Akka Streams will fuse the stream operators. This means that the processing steps of a flow or stream can be executed within the same Actor and has two consequences:
- passing elements from one operator to the next is a lot faster between fused operators due to avoiding the asynchronous messaging overhead
- fused stream operators do not run in parallel to each other, meaning that only up to one CPU core is used for each fused part
To allow for parallel processing you will have to insert asynchronous boundaries manually into your flows and operators by way of adding
Attributes.asyncBoundary
Attributes.asyncBoundary using the method
async on
Source
Source,
Sink
Sink and
Flow
Flow to operators that shall communicate with the downstream of the graph in an asynchronous fashion.
- Scala
source
Source(List(1, 2, 3)).map(_ + 1).async.map(_ * 2).to(Sink.ignore)
- Java
source operators that have been added since then.
Without fusing (i.e. up to version 2.0-M2) each stream operator operators. In those cases where buffering is needed in order to allow the stream to run at all, you will have to insert explicit buffers with the
.buffer() operator—typically a buffer of size 2 is enough to allow a feedback loop to function.
Combining materialized values
Since every operator in Akka Streams can provide a materialized value after being materialized, it is necessary to somehow express how these values should be composed to a final value when we plug these operators together. For this, many operator methods have variants that take an additional argument, a function, that will be used to combine the resulting values. Some examples of using these combiners are illustrated in the example below.
- Scala
source
// A source that can be signalled explicitly from the outside val source: Source[Int, Promise[Option[Int]]] = Source.maybe[Graph[Graph(source, flow, sink)((_, _, _)) { implicit builder => (src, f, dst) => import GraphDSL.Implicits._ src ~> f ~> dst ClosedShape })
- Java
source
//, system); CompletableFuture<Optional<Integer>> r5 = flow.to(sink).runWith(source, system); Pair<CompletableFuture<Optional<Integer>>, CompletionStage<Integer>> r6 = flow.runWith(source, sink, system); //. For details see Accessing the materialized value inside the Graph.
Source pre-materialization
There are situations in which you require a
Source
Source materialized value before the
Source gets hooked up to the rest of the graph. This is particularly useful in the case of “materialized value powered”
Sources, like
Source.queue,
Source.actorRef or
Source.maybe.
By using the
preMaterialize
preMaterialize operator on a
Source, you can obtain its materialized value and another
Source. The latter can be used to consume messages from the original
Source. Note that this can be materialized multiple times.
- Scala
source
val completeWithDone: PartialFunction[Any, CompletionStrategy] = { case Done => CompletionStrategy.immediately } val matValuePoweredSource = Source.actorRef[String]( completionMatcher = completeWithDone, failureMatcher = PartialFunction.empty, bufferSize = 100, overflowStrategy = OverflowStrategy.fail) val (actorRef, source) = matValuePoweredSource.preMaterialize() actorRef ! "Hello!" // pass source around for materialization source.runWith(Sink.foreach(println))
- Java
source
Source<String, ActorRef> matValuePoweredSource = Source.actorRef( elem -> { // complete stream immediately if we send it Done if (elem == Done.done()) return Optional.of(CompletionStrategy.immediately()); else return Optional.empty(); }, // never fail the stream because of a message elem -> Optional.empty(), 100, OverflowStrategy.fail()); Pair<ActorRef, Source<String, NotUsed>> actorRefSourcePair = matValuePoweredSource.preMaterialize(system); actorRefSourcePair.first().tell("Hello!", ActorRef.noSender()); // pass source around for materialization actorRefSourcePair.second().runWith(Sink.foreach(System.out::println), system);
Stream ordering
In Akka Streams almost all computation operators upheld by async operations such as
mapAsync
mapAsync, however an unordered version exists called
mapAsyncUnordered,
MergePrioritized or
GraphStage – which gives you full control over how the merge is performed.
Actor Materializer Lifecycle
The
Materializer
Materializer is a component that is responsible for turning the stream blueprint into a running stream and emitting the “materialized value”. An
ActorSystem
ActorSystem wide
Materializer is provided by the Akka
Extension
SystemMaterializer
SystemMaterializer by having an implicit
ActorSystem in scopepassing the
ActorSystem to the various
run methods this way there is no need to worry about the
Materializer unless there are special requirements.
The use case that may require a custom instance of
Materializer is when all streams materialized in an actor should be tied to the Actor lifecycle and stop if the Actor stops or crashes.
An important aspect of working with streams and actors is understanding a
Materializer’s life-cycle. The materializer is bound to the lifecycle of the
ActorRefFactory
ActorRefFactory it is created from, which in practice will be either an
ActorSystem
ActorSystem or
ActorContext
ActorContext (when the materializer is created within an
Actor
Actor).
Tying it to the
ActorSystem should be replaced with using the system materializer from Akka 2.6 and on.
When run by the system materializer the streams will run until the
ActorSystem is shut down. When the materializer is shut down before the streams have run to completion, they will be terminated abruptly. This is a little different than the usual way to terminate streams, which is by cancelling/completing them. The stream lifecycles are bound to the materializer like this to prevent leaks, and in normal operations you should not rely on the mechanism and rather use
KillSwitch
KillSwitch or normal completion signals to manage the lifecycles of your streams.
If we look at the following example, where we create the
Materializer within an
Actor:
- Scala
source
final class RunWithMyself extends Actor { implicit val mat: Materializer = Materializer(context) Source.maybe.runWith(Sink.onComplete { case Success(done) => println(s"Completed: $done") case Failure(ex) => println(s"Failed: ${ex.getMessage}") }) def receive = { case "boom" => context.stop(self) // will also terminate the stream } }
- Java
source
final class RunWithMyself extends AbstractActor { Materializer mat = Materializer.createMaterializer(context()); @Override public void preStart() throws Exception { Source.repeat("hello") .runWith( Sink.onComplete( tryDone -> { System.out.println("Terminated stream: " + tryDone); }), mat); } @Override public Receive createReceive() { return receiveBuilder() .match( String.class, p -> { // this WILL terminate the above stream as well context().stop(self()); }) .build(); } }
In the above example we used the
ActorContext
ActorContext to create the materializer. This binds its lifecycle to the surrounding
Actor
Actor. In other words, while the stream we started there would under normal circumstances run forever, if we stop the Actor it would terminate the stream as well. We have bound the stream’s lifecycle to the surrounding actor’s lifecycle. This is a very useful technique if the stream is closely related to the actor, e.g. when the actor represents a user or other entity, that we continuously query using the created stream – and it would not make sense to keep the stream alive when the actor has terminated already. The streams termination will be signalled by an “Abrupt termination exception” signaled by the stream.
You may also cause a
Materializer to shut down by explicitly calling
shutdown()
shutdown() on it, resulting in abruptly terminating all of the streams it has been running then.
Sometimes, however, you may want to explicitly create a stream that will out-last the actor’s life. For example, you are using an Akka stream to push some large stream of data to an external service. You may want to eagerly stop the Actor since it has performed all of its duties already:
- Scala
source
final class RunForever(implicit val mat: Materializer) extends Actor { Source.maybe.runWith(Sink.onComplete { case Success(done) => println(s"Completed: $done") case Failure(ex) => println(s"Failed: ${ex.getMessage}") }) def receive = { case "boom" => context.stop(self) // will NOT terminate the stream (it's bound to the system!) } }
- Java
source
final class RunForever extends AbstractActor { private final Materializer materializer; public RunForever(Materializer materializer) { this.materializer = materializer; } @Override public void preStart() throws Exception { Source.repeat("hello") .runWith( Sink.onComplete( tryDone -> { System.out.println("Terminated stream: " + tryDone); }), materializer); } @Override public Receive createReceive() { return receiveBuilder() .match( String.class, p -> { // will NOT terminate the stream (it's bound to the system!) context().stop(self()); }) .build(); }
In the above example we pass in a materializer to the Actor, which results in binding its lifecycle to the entire
ActorSystem
ActorSystem rather than the single enclosing actor. This can be useful if you want to share a materializer or group streams into specific materializers, for example because of the materializer’s settings etc.
Do not create new actor materializers inside actors by passing the
context.system
context.system() to it. This will cause a new
Materializer
Materializer to be created and potentially leaked (unless you shut it down explicitly) for each such actor. It is instead recommended to either pass-in the Materializer or create one using the actor’s
context.
|
https://doc.akka.io/docs/akka/current/stream/stream-flows-and-basics.html
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
std::make_shared, std::make_shared_for_overwrite
Tand wraps it in a std::shared_ptr using
argsas the parameter list for the constructor of
T. The object is constructed as if by the expression ::new (pv) T(std::forward<Args>(args)...), where
pvis an internal
void*pointer to storage suitable to hold an object of type
T. The storage is typically larger than
sizeof(T)in order to use one allocation for both the control block of the shared pointer and the
Tobject. The
std::shared_ptrconstructor called by this function enables
shared_from_thiswith a pointer to the newly constructed object of type
T.
std::remove_all_extents_t<T>are value-initialized as if by placement-new expression ::new(pv) std::remove_all_extents_t<T>(). The overload (2) creates an array of size
Nalong the first dimension. The array elements are initialized in ascending order of their addresses, and when their lifetime ends are destroyed in the reverse order of their original construction.
u. If
Uis not an array type, then this is performed as if by the same placement-new expression as in (1); otherwise, this is performed as if by initializing every non-array element of the (possibly multidimensional) array with the corresponding element from
uwith the same placement-new expression as in (1). The overload (4) creates an array of size
Nalong the first dimension. The array elements are initialized in ascending order of their addresses, and when their lifetime ends are destroyed in the reverse order of their original construction.
Tis not an array type and (3) if
Tis
U[N], except that the created object is default-initialized.
In each case, the object (or individual elements if
T is an array type) (since C++20) will be destroyed by p->~X(), where
p is a pointer to the object and
X is its type.
[edit] Parameters
[edit] Return value
std::shared_ptr of an instance of type
T.
[edit] Exceptions
May throw std::bad_alloc or any exception thrown by the constructor of
T. If an exception is thrown, the functions have no effect. If an exception is thrown during the construction of the array, already-initialized elements are destroyed in reverse order. (since C++20)
[edit] Notes
This function may be used as an alternative to std::shared_ptr<T>(new T(args...)). The trade-offs are:
- std::shared_ptr<T>(new T(args...)) performs at least two allocations (one for the object
Tand one for the control block of the shared pointer), while std::make_shared<T> typically performs only one allocation (the standard recommends, but does not require this; all known implementations do this)
- If any std::weak_ptr references the control block created by
std::make_sharedafter the lifetime of all shared owners ended, the memory occupied by
Tpersists until all weak owners get destroyed as well, which may be undesirable if
sizeof(T)is large.
- std::shared_ptr<T>(new T(args...)) may call a non-public constructor of
Tif executed in context where it is accessible, while
std::make_sharedrequires public access to the selected constructor.
- Unlike the std::shared_ptr constructors,
std::make_shareddoes not allow a custom deleter.
std::make_shareduses ::new, so if any special behavior has been set up using a class-specific operator new, it will differ from std::shared_ptr<T>(new T(args...)).
A constructor enables
shared_from_this with a pointer
ptr of type
U* means that it determines if
U has an unambiguous and accessible (since C++17)::enable_shared_from_this. The assignment to the
weak_this member is not atomic and conflicts with any potentially concurrent access to the same object. This ensures that future calls to shared_from_this() would share ownership with the std::shared_ptr created by this raw pointer constructor.
The test ptr->weak_this.expired() in the exposition code above makes sure that
weak_this is not reassigned if it already indicates an owner. This test is required as of C++17.
[edit] Example
#include <memory> #include <vector> #include <iostream> #include <type_traits> struct C { // constructors needed (until C++20) C(int i) : i(i) {} C(int i, float f) : i(i), f(f) {} int i; float f{}; }; int main() { // using `auto` for the type of `sp1` auto sp1 = std::make_shared<C>(1); // overload (1) static_assert(std::is_same_v<decltype(sp1), std::shared_ptr<C>>); std::cout << "sp1->{ i:" << sp1->i << ", f:" << sp1->f << " }\n"; // being explicit with the type of `sp2` std::shared_ptr<C> sp2 = std::make_shared<C>(2, 3.0f); // overload (1) static_assert(std::is_same_v<decltype(sp2), std::shared_ptr<C>>); static_assert(std::is_same_v<decltype(sp1), decltype(sp2)>); std::cout << "sp2->{ i:" << sp2->i << ", f:" << sp2->f << " }\n"; // shared_ptr to a value-initialized float[64]; overload (2): std::shared_ptr<float[]> sp3 = std::make_shared<float[]>(64); // shared_ptr to a value-initialized long[5][3][4]; overload (2): std::shared_ptr<long[][3][4]> sp4 = std::make_shared<long[][3][4]>(5); // shared_ptr to a value-initialized short[128]; overload (3): std::shared_ptr<short[128]> sp5 = std::make_shared<short[128]>(); // shared_ptr to a value-initialized int[7][6][5]; overload (3): std::shared_ptr<int[7][6][5]> sp6 = std::make_shared<int[7][6][5]>(); // shared_ptr to a double[256], where each element is 2.0; overload (4): std::shared_ptr<double[]> sp7 = std::make_shared<double[]>(256, 2.0); // shared_ptr to a double[7][2], where each double[2] element is {3.0, 4.0}; overload (4): std::shared_ptr<double[][2]> sp8 = std::make_shared<double[][2]>(7, {3.0, 4.0}); // shared_ptr to a vector<int>[4], where each vector has contents {5, 6}; overload (4): std::shared_ptr<std::vector<int>[]> sp9 = std::make_shared<std::vector<int>[]>(4, {5, 6}); // shared_ptr to a float[512], where each element is 1.0; overload (5): std::shared_ptr<float[512]> spA = std::make_shared<float[512]>(1.0); // shared_ptr to a double[6][2], where each double[2] element is {1.0, 2.0}; overload (5): std::shared_ptr<double[6][2]> spB = std::make_shared<double[6][2]>({1.0, 2.0}); // shared_ptr to a vector<int>[4], where each vector has contents {5, 6}; overload (5): std::shared_ptr<std::vector<int>[4]> spC = std::make_shared<std::vector<int>[4]>({5, 6}); }
Output:
sp1->{ i:1, f:0 } sp2->{ i:2, f:3 }
|
https://en.cppreference.com/w/cpp/memory/shared_ptr/make_shared
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Barcode Software
reportviewer barcode font
Risk Analysis in Software
Integrate barcode 39 in Software Risk Analysis
Downloaded from Digital Engineering Library @ McGraw-Hill () Copyright 2004 The McGraw-Hill Companies. All rights reserved. Any use is subject to the Terms of Use as given at the website.
using barcode generator for .net windows forms control to generate, create bar code image in .net windows forms applications. recognise
BusinessRefinery.com/ bar code
asp.net barcode label printing
using logic asp.net web pages to display bar code with asp.net web,windows application
BusinessRefinery.com/ barcodes
1.5M DS-1
how to use barcode reader in asp.net c#
Using Barcode decoder for right .net framework Control to read, scan read, scan image in .net framework applications.
BusinessRefinery.com/ bar code
create barcode generator c#
use .net barcode drawer to attach barcode on c# trial
BusinessRefinery.com/ bar code
Static Routes
using barcode creation for visual studio .net crystal report control to generate, create barcode image in visual studio .net crystal report applications. dlls
BusinessRefinery.com/ barcodes
birt barcode plugin
use birt reports barcode development to assign barcodes on java input
BusinessRefinery.com/ bar code
CorelDRAW X4: The Official Guide
to print qr code iso/iec18004 and qr code iso/iec18004 data, size, image with .net barcode sdk compatible
BusinessRefinery.com/QR-Code
using barcode implement for excel spreadsheets control to generate, create qr codes image in excel spreadsheets applications. packages
BusinessRefinery.com/qr barcode
For a given total rise h for angle q0 and any two of the b angles given one can solve for all angles and all values of the derivative curves.
to deploy qrcode and qr code jis x 0510 data, size, image with excel barcode sdk files
BusinessRefinery.com/Denso QR Bar Code
qr-codes data correct in .net
BusinessRefinery.com/qrcode
Securing the System
crystal reports qr code
use .net vs 2010 qr-code integrating to generate qr-codes in .net formula
BusinessRefinery.com/qr bidimensional barcode
to display qr codes and qr data, size, image with vb barcode sdk button
BusinessRefinery.com/Quick Response Code
Temperature Conversions
using barcode integrating for office excel control to generate, create pdf417 image in office excel applications. preview
BusinessRefinery.com/PDF417
ssrs code 39
using image sql server to add barcode 39 for asp.net web,windows application
BusinessRefinery.com/Code39
BellCore developed transaction protocols to handle the call processing, using an 1129protocol specification. The 1129 transaction is triggered when a call comes in. It is initiated with the delivery of a message from the IP to the SCP . Thereafter, the transaction continues with queries issued by the SCP and synchronous responses to these queries returned by the IP, reporting results of the requested action. The BellCore recommendations call for multiple intelligent peripherals within the network. Each is capable of working with multiple independent service control points via communication across multiple data paths. Each IP operates as though it is distributed across multiple host platforms interconnected by multiple LANs. Introducing IP into an AIN environment is a major expense, requiring significant investment in hardware, networking, and supporting software. The BellCore philosophy is to provide redundant components and data paths eliminating single points of failure wherever possible. However, many situations exist whereby an IP or SN provides a service, yet the service does not warrant redundant infrastructure. Therefore, a solution is required for the IP or SN to provide suitable reliability inherently.
.net code 128 reader
Using Barcode recognizer for custom .net framework Control to read, scan read, scan image in .net framework applications.
BusinessRefinery.com/code 128b
crystal reports data matrix
use .net crystal report datamatrix 2d barcode encoder to draw datamatrix in .net jpg
BusinessRefinery.com/Data Matrix 2d barcode
Test case selection. The last step before testing can begin is test case selection. The operator must tell the ETS which cases should be run during the current test campaign. Some test cases might not be selectable, such as those where the IUT has not implemented an optional feature. Among those test cases that are selectable, the operator might opt to run only those that concentrate on particular aspects of the IUT. Figure 6.8 shows a Test Suite Browser screen for the selection of test cases. This notion of selectability is clearly defined in conformance testing. For each test case, there is a boolean expression (called a test case selection expression) that must produce the value True before the test case can be selected. Each selection expression is based on one or more test suite parameters. Consider the test case selection example shown in Figure 6.9. As you can see, test case Ver_VC_Only_F4 can be selected only if the IUT supports VC service but not VP service. In addition, most ETS packages allow the operator to disregard the test case selection expressions and to select any test case desired (such as for testing abnormal situations). Running the test. As soon as one or more test cases have been selected, the test suite can be run. During execution, the tester sends PDUs to the IUT, analyzes its reaction (the contents of the cells sent by the IUT, time when these PDUs are sent, etc.), compares the expected and the observed behavior of the IUT, assigns a verdict to the test case, displays and records on disk all protocol data exchanged, and proceeds to the next test case. This type of testing is called stimulus-and-response testing because the tester expects a response from the IUT each time a stimulus is sent.
ssrs fixed data matrix
using barcode creation for sql database control to generate, create gs1 datamatrix barcode image in sql database applications. rotation
BusinessRefinery.com/barcode data matrix
using barcode implement for microsoft excel control to generate, create 3 of 9 image in microsoft excel applications. activity
BusinessRefinery.com/bar code 39
A network includes all of the hardware and software components used to
generate code 39 barcode in c#
generate, create 3 of 9 barcode classes none for .net c# projects
BusinessRefinery.com/Code-39
using byte excel microsoft to encode ecc200 with asp.net web,windows application
BusinessRefinery.com/Data Matrix ECC200
HISTORY
1. Simplify these logarithmic expressions. a3 b 2 (a) ln 5 c d log2 (a3 b) (b) log3 (ab2 ) (c) ln[e 2x z3 w 2 ] (d) log10 [1000w 100] 2. Solve each of these equations for x. (a) 2x 3 x = 2x e 2 2x (b) x 2x = 10x 10 3 5 (c) 22x 33x 44x = 6 5 2 (d) 2x 3x = x x 3 2 3 e 3. Calculate each of these derivatives. d (a) ln[cos(x 2 )] dx x3 d ln (b) dx x 1 d cos(e x ) (c) e dx d cos(ln x) (d) dx 4. Calculate each of these integrals. (a) (b) (c)
Standard ACLs can filter only on the source IP address. If you omit the
using namespace std;
The C# Language
Station 6 Station 5
This program introduces several new concepts. First, the statement
Figure 10-1. An application with multiple connections
Standards
More Code 39 on None
how to generate barcode in rdlc report: Threat Forecasting Data Is Sparse in Software Access ANSI/AIM Code 39 in Software Threat Forecasting Data Is Sparse
how to use barcode in rdlc report: Risk Transfer in Software Access 39 barcode in Software Risk Transfer
how to use barcode in rdlc report: Transfers and Reassignments in Software Render USS Code 39 in Software Transfers and Reassignments
how to use barcode in rdlc report: 2: IT Governance and Risk Management in Software Insert 3 of 9 barcode in Software 2: IT Governance and Risk Management
how to use barcode in rdlc report: ISO 20000 in Software Generator barcode 3 of 9 in Software ISO 20000
rdlc barcode font: 2: IT Governance and Risk Management in Software Drawer Code 39 Full ASCII in Software 2: IT Governance and Risk Management
word 2010 code 39 barcode: Reviewing Documentation and Records in Software Encoding 3 of 9 barcode in Software Reviewing Documentation and Records
rdlc barcode font: Notes in Software Draw barcode code39 in Software Notes
rdlc barcode font: Resource Planning in Software Receive 3 of 9 barcode in Software Resource Planning
rdlc barcode font: CISA Certified Information Systems Auditor All-in-One Exam Guide in Software Develop 39 barcode in Software CISA Certified Information Systems Auditor All-in-One Exam Guide
rdlc barcode font: G14, Application Systems Review in Software Assign ANSI/AIM Code 39 in Software G14, Application Systems Review
rdlc barcode font: P6, Firewalls in Software Writer 3 of 9 in Software P6, Firewalls
rdlc barcode font: 3: The Audit Process in Software Implement Code-39 in Software 3: The Audit Process
how to print barcode in rdlc report: 3: The Audit Process in Software Develop Code 3 of 9 in Software 3: The Audit Process
how to print barcode in rdlc report: 3: The Audit Process in Software Produce 3 of 9 barcode in Software 3: The Audit Process
how to print barcode in rdlc report: Audit Risk and Materiality in Software Integration Code-39 in Software Audit Risk and Materiality
how to print barcode in rdlc report: 3: The Audit Process in Software Attach Code 3 of 9 in Software 3: The Audit Process
how to print barcode in rdlc report: What s in a Title in Software Access 3 of 9 in Software What s in a Title
how to print barcode in rdlc report: Project Roles and Responsibilities in Software Get Code 39 in Software Project Roles and Responsibilities
how to print barcode in rdlc report: CISA Certified Information Systems Auditor All-in-One Exam Guide in Software Render ANSI/AIM Code 39 in Software CISA Certified Information Systems Auditor All-in-One Exam Guide
Articles you may be interested
.net barcode sdk open source: C H A P T E R in Software Deploy barcode code 128 in Software C H A P T E R
progress bar code in vb.net 2008: This program produces the following output. in Java Render PDF-417 2d barcode in Java This program produces the following output.
creating qrcodes in excel: Final Exam in Software Attach QR-Code in Software Final Exam
ssrs export to pdf barcode font: How Arguments Are Passed in .net C# Development QR-Code in .net C# How Arguments Are Passed
code 128 java free: Number of BDAV/BDMV elements in Software Creation qr codes in Software Number of BDAV/BDMV elements
create qr codes from excel file: CAM DESIGN HANDBOOK in Software Implement DataMatrix in Software CAM DESIGN HANDBOOK
how to create barcode in vb net 2008: Skills and Careers in the Game Industry in Software Generating code 128 barcode in Software Skills and Careers in the Game Industry
barcode recognition vb.net: PART I PART I PART I in Software Encoding QR-Code in Software PART I PART I PART I
print barcode rdlc report: Requiring Address Translation in Software Generating Data Matrix barcode in Software Requiring Address Translation
generate bar code in vb.net: Protocol Analysis 548 Basic Telecommunications Technologies in Software Writer USS Code 128 in Software Protocol Analysis 548 Basic Telecommunications Technologies
barcode font vb.net: in .NET Use data matrix barcodes in .NET
barcode label printing in vb.net: What is the prevalence of stress incontinence in Android Drawer QRCode in Android What is the prevalence of stress incontinence
create barcodes in vb.net: Installing the Universal Printer in Software Integrate QR-Code in Software Installing the Universal Printer
print barcode labels in vb.net: Fiber Optic Network Elements Fiber Optic Network Elements 485 in Software Integration Code 128 in Software Fiber Optic Network Elements Fiber Optic Network Elements 485
generate barcode image vb.net: Manufacturing test elements in Software Connect Code-128 in Software Manufacturing test elements
make code 39 barcodes excel: CHANGING THE SETTINGS in Software Implementation QR Code JIS X 0510 in Software CHANGING THE SETTINGS
java barcode api open source: Build Your Own Combat Robot in Software Develop EAN-13 in Software Build Your Own Combat Robot
rdlc barcode c#: Tunnel Limits in Software Deploy Data Matrix 2d barcode in Software Tunnel Limits
how to create barcodes in visual basic .net: Citrix XenApp Platinum Edition for Windows: The Official Guide in Software Creator Quick Response Code in Software Citrix XenApp Platinum Edition for Windows: The Official Guide
free download barcode scanner for java mobile: Not Better Enough in Software Integration QR Code JIS X 0510 in Software Not Better Enough
|
http://www.businessrefinery.com/yc3/476/28/
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
In this series, we'll build a web application from scratch with Laravel—a simple and elegant PHP web framework.
First up, we'll learn more about Laravel and why it's such a great choice for your next PHP-based web application. web application framework that describes itself.
- Elegant—most of Laravel's functions work seamlessly with very little configuration, relying on industry-standard conventions to lessen code bloat.
- Well-documented—Laravel's documentation is complete and always up-to-date. The framework's creator makes a point of updating the documentation before releasing a new version, ensuring that people who are learning the framework always have the latest documentation.
What Makes Laravel Different?
As with any PHP framework, Laravel boasts a multitude of functions that differentiate it from the rest of the pack. Here are some, which I feel are the most important (based on the Laravel documentation).
Packages
Packages are to Laravel as PEAR is to PHP; they are add-on code that you can download and plug into your Laravel installation. Laravel comes with a command-line tool called Artisan, which makes it incredibly easy to install bundles.
We all know the importance of web security in this modern digital era. One of my favorite Laravel Packages, called Spatie, adds a useful tool to Laravel that lets you define roles and permissions in your application. In essence, it lets you specify which users have access to what resources.
Other very useful Laravel packages include Laravel Mix (a webpack front-end build tool), Eloquent-Sluggable (for making slugs), and Laravel Debugbar (for debugging).
Eloquent ORM
The Eloquent ORM is the most advanced PHP
ActiveRecordimplementation members' that creates a
users table in a database, taken from the Laravel documentation:
Schema::table('users', function($table) { $table->create(); $table->increments('id'); $table->string('username'); $table->string('email'); $table->string('phone')->nullable(); $table->text('about'); $table->timestamps(); });
Migrations are defined inside a migration PHP file. These files go inside the app/database/migration folder in a Laravel project.
Typically, a migration file follows the naming convention: 2022_05_23_000000_create_users_table.php, where 2022_05_23 is the date and create_users_table is the type of migration.
Running the migration above will create a users table consisting of the specified columns inside your chosen database. Each column is assigned a type using the type methods (see a complete list in the official docs). To create a database migration, use the
artisan utility as follows:
php artisan migrate
You can also perform more granular tasks on your table like adding or removing a column, reordering the columns, seeding data, and so on.
Here's an example that adds the location column to the existing users table:
Schema::table('users', function (Blueprint $table) { // Add location after the id column in the table $table->string('location')->after('id'); });
The file for this migration would follow almost the same naming convention discussed earlier. The only difference would be in the migration name, which would go like: add_location_column_to_users_table.
Seeding
When developing an application, it's important that you test the functionality to see if the app is working as intended. Seeding allows you to populate your tables with fake data en masse, for testing purposes.
All seeders inside a project go in the app/database/seeders directory. To generate a seeder, use the
artisan utility as follows:
php artisan make:seeder UserSeeder
Here's a basic example that populates the users table with some autogenerated data, from the docs:
DB::table('users')->insert([ 'name' => Str::random(10), 'email' => Str::random(10).'@gmail.com', 'password' => Hash::make('password'), ]);
You can use model factories to generate large amounts of database records at a time. This basic example creates 50 posts for a user inside this database:
Post::factory() ->count(50) ->belongsToUser(1) ->create();
Unit-Testing
TestCase class, like so:
class MyUnitTest extends TestCase { public function somethingShouldBeTrue() { $this->assertTrue(true); } }
To run your Laravel application's tests, let's, again, use the
artisan command-line utility:
php artisan test
That command will run all the tests that are found within the app/tests/unit directory of your Laravel application.
Let's Build a To-Do Application With Laravel
Now that we know more about Laravel, it's time to get hands-on by building a web application with it from scratch. We'll build a to-do application that allows us to post items to the database. I'll also include the functionality to mark them once completed. When done, it will look like this:
In the process, we'll learn about the different concepts and features of Laravel like routing and controllers, as well as MySQL and, of course, the Blade templating language.
With that in mind, let's dive in!
Installing Laravel
composer create-project laravel/laravel my-app
After the application has been created, cd into my-app and start your development server:
cd my-app php artisan serve
Now we're done setting up the environment. After some time, you should be able to access your Laravel development server at.
Setting Up the Application
At this point, Laravel has scaffolded a working framework for you. You'll find a host of folders and files inside your root project folder. We won't go over all of these folders—instead, we'll focus on the most important ones.
First, you have the app folder. This folder contains the Models and Controllers, amongst others. Next you have the database folder, which contains the migrations, factories, and seeders. resources is where all the front-end code will go, and specifically in here we find the view folder, where we'll create the view of our application using Blade templates. Finally, routes will contain all of our application routes. Inside routes/web.php, we'll define the web routes of our application.
Now that we have a basic understanding of the framework, let's create and serve our first page. By default, Laravel sets up a starter template at resources/views/welcome.blade.php, which is what we see at. This is a Blade template, not HTML. Whenever we modify that file, the changes are reflected in the browser. You can modify yours and see the changes in effect.
First, we'll bring Bootstrap CSS into our project by embedding its CDN link in the
<head> tag of our welcome.blade.php file:
<head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>Laravel</title> <!-- Bootstrap CDN --> "> </head>
Then we'll create a basic Bootstrap CSS form for our to-do layout:
<form method="post">
Hooking Up the Database (MySQL)
MySQL is a popular relational database, similar to Microsoft SQL Server and Oracle. It's used by many web applications to create relational datastores where the records relate to one another.
You can get MySQL via popular distributions like WAMPServer and XAMPP. Create a new database in your local MySQL installation and name it todo. In the following section, we'll create and migrate a table comprising some columns to this database.
Once you have your MySQL server set up, simply open the .env file in your project root folder and pass your database name to DB_DATABASE, like so:
DB_DATABASE=todo
Creating a Table and Adding Columns (Models and Migrations)
Earlier we learnt that migrations are simply a way to execute changes to our database tables. They allow us to create tables and add or remove columns right from our application. But in relation to databases, there is another important concept we must learn about, and that is Models.
Models allow you to retrieve, insert, and update information in your data table. Typically, each of the tables in a database should have a corresponding “Model” that allows us to interact with that table.
Now we'll create our first model, which is
ListItem. To do so, we run the following artisan command:
php artisan make:model ListItem -m
The
-m flag creates a migration file along with the model, and this migration file will go inside the app/database/migrations folder.
Next, we'll modify the migration file and add the columns we want in our list_items table:
<?php use Illuminate\Database\Migrations\Migration; use Illuminate\Database\Schema\Blueprint; use Illuminate\Support\Facades\Schema; class CreateListItemsTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('list_items', function (Blueprint $table) { $table->id(); $table->string('name'); $table->integer('is_complete'); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::dropIfExists('list_items'); } }
Here we are adding the fields id, name, and is_complete.
table->timestamps() will automatically generate two fields in our database: created_at and updated_at.
Run the following to migrate the table and its columns to your database:
php artisan migrate
Setting Up Routes
In Laravel, all requests to the application are mapped to specific functions or controllers by
Routes. They are responsible for instructing the application where URLs go. For example, if we wanted to render the home view file, we could create the following route within web.php, found inside the routes folder:
Route::get('/home', function() { return view('home'); })
Alternatively, if we instead needed to route to a
Controller, say, the HomeController.php controller, we might do something like this:
Route::controller('HomeController');
This would route to the HomeController.php controller file. Any action method there will be made available as well. To take things further, you can specify a particular method to handle that route:
Route.get([HomeController::class, 'index'])
Your HomeController.php file will basically look like this:
<?php namespace App\Http\Controllers; class HomeController extends Controller { public function index() { } }
Don't worry, we'll learn more about controllers soon. Basically, what we did is specify the
index method inside the
HomeController class to be called when a user navigates to localhost:8000/home.
In addition, a route name,
home.index, is assigned to the route. This is especially useful if you're using a templating language like Blade (as we'll see soon). Sometimes, two routes may share the same path (e.g. /thread), but one is GET and the other is POST. In such cases, you can distinguish both by using different route names, preventing mix-ups.
An important thing to note here is that by default, Laravel does not route to the controllers as other PHP frameworks do. This is by design. By doing so, we can actually create simple pages without the need to create a controller for it. For example, if we wanted to create a static Contact Us page that just lists contact information, we can simply do something like this:
Route::any('contact-us', function() { return view('contact-us'); })
This will route and render the resources/views/contact-us.blade)
- Middleware—this lets us run some functionality before or after a route is executed, depending on the route that was called. For example, we can create auth middleware that will be called before all routes, except the home and about routes.
For the purposes of this web application, though, we only need two routes: the first route just shows the welcome page along with the to-do list, and the second route will be called when we submit a new to-do item using the form.
Open up the web.php file and add the following routes:
Route::get('/', [TodoListController::class, 'index']); Route::post('/saveItem', [TodoListController::class, 'saveItem'])->name('saveItem');
For the second, we set the method to post to indicate that we are handling a POST request. When this route is triggered, we want to call the
saveItem method inside the
TodoListController (we'll create it soon). A route name is also set. We'll use this name when making a POST request from the form.
Next, we import the controller on top of the file:
use App\Http\Controllers\TodoListController;
Time to learn about controllers and create our very own TodoListController.
Creating Controllers
Typically, you define all of the logic for handling a route inside a controller. Controllers in Laravel are found inside the
app/Http/Controllers folder.
To create a controller, we use the artisan utility. Remember that
TodoListController whose method I said was going to handle to POST request to our
saveItem route? It's time to create it.
Run the following command on your terminal:
php artisan make:controller TodoListController
This will create a TodoListController.php file inside the
app/Http/Controllers folder. By convention, we'll want to name the file something descriptive that will also be the name of the controller class, which is why I went with that name.
In this controller class, we'll define two methods:
index and
saveItem.
<?php namespace App\Http\Controllers; use Illuminate\Http\Request; use App\Models\ListItem class TodoListController extends Controller { public function index() { return view('welcome', ['listItems' => ListItem::all()]); } public function saveItem(Request $request) { $item = new ListItem; $item->name = request->name; $item->is_complete = 0; $item->save(); return view('welcome'); } }
First, we import two Models:
Request and
ListItem. Request gives us all the information about the HTTP request. ListItem is the model we created earlier for saving a to-do list item in our database.
The index method returns the welcome view and passes all the list items from the database. That way, we can display them on the page if there is any.
The
saveItem method is for saving a to-do item to our database. Here we create a new
ListItem instance. We then set its
name to the name from the request payload, and set
is_complete to 0 (represents
false or "no"). Finally, we return the welcome page.
For this to work, all we have to do now is modify our welcome.blade.php file, which is where we'll be making a request from and showing the to-do list.
For now, let's learn more about controllers.
More Controller Fun
Middleware
There's a lot more that we can do with controllers, rather than them just being gateways to the view files. For example, remember the Middleware feature that I mentioned earlier in the routes section? Aside from being attached to specific Routes, we can also attach them to specific controllers! Simply create a
__constructor method for the controller, and set up the middleware there. For example, if we need to ensure that a user is authenticated for all the methods in a controller, we can make use of our example
auth middleware:
public function __construct() { $this->middleware('auth'); }
This will call the auth middleware on all actions in this controller. If we want to target some specific actions, we can refer to the only method, like so:
public function __construct() { $this->middleware('auth')->only('index'); // Or for only index and store actions $this->middleware('auth')->only(array('index, 'store')); }
We can alternatively use the
except method to implement the middleware on all actions, except a few:
public function __construct() { $this->middleware('auth')->except('store'); }
Notice how expressive this code is?
We can even target a specific HTTP verb:
public function __construct() { $this->middleware('auth')->except('store')->on('post'); }
The Base
Controller
Most, if not all, controllers extend the
Controller. This gives us a way to define methods that will be the same for all our controllers. For example, if we need to create a logging method to log any controller request:
class Controller extends BaseController {. If you want to learn more, this section of Laravel's documentation provides detailed information about controllers.
Now let's hook up our view to our already-implemented route and controller.
Creating the need to have a
.blade.phpextension. This tells Laravel to use the engine on the view file.
Now, back in our welcome.blade.php file, we do two things:
- We use forelse to loop through our to-do list and show them. If there is no item (i.e. we haven't saved any item yet), we instead show No Items Saved Yet.
- We set the route to POST to. We'll make our form work by adding a CSRF token to it. This is a security measure you take whenever you're posting data with forms.
Replace the contents of
div.container with the following markup:
@forelse ($listItems as $listItem) <div class="alert alert-primary" role="alert"> <span>Item: {{ $listItem->name }}</span> <form method="post" action="{{ route('markAsComplete', $listItem->id) }}"> {{ csrf_field() }} <button type="submit" class="btn {{ $listItem->is_complete ? 'btn-success' : 'btn-danger' }}" > {{ $listItem->is_complete ? 'Completed' : 'Mark as Complete' }} </button> </form> </div> @empty <div class="alert alert-danger" role="alert"> No Items Saved Yet </div> @endforelse <form method="post" action="{{ route('saveItem') }}"> {{ csrf>
Notice how we used the
is_complete value of each list item to decide the color of the button and text content. We'll come to how we change it very soon.
Also, we used the
route() helper function provided by Blade to define a form submit action; we simply route to
saveItem (that we created earlier) when this form is submitted.
To test this out, navigate your browser to localhost:3000 to see the welcome page. Now type a to-do item in the text input and submit. The new item will be shown on the page.
Also, go to your MySQL database and check the list_items table to confirm. While you're there, you'll notice that all the items have their
is_complete column set to 0. One last thing we'll do is add the functionality for marking a to-do list as Completed.
If you recall, we already added the button for marking a to-do item as completed in the view. Now we need to define the route. Go to your web.php and add the following:
Route::post('/markAsComplete/{id}', [TodoListController::class, 'markItem'])->name('markAsComplete');
Because we need to know which list item is being marked, we pass its
id to the
markItem method in the controller since all items had an
id column and we passed the id to
route() inside the view.
Next, we define the method in the controller:
public function markItem($id) { $item = ListItem::find($id); $item->is_complete = 1; $item->save(); return redirect('/'); }
Now refresh your page at and mark any item. The button colour and text content will change.
Now that we have our main layout, let's see how to include other sections inside it.
@section and
@yield
Sections let us inject content into the main layout from within a view. To define which part of our main layout is a section, we surround it with
@section and
@yield_section Blade tags.
Supposing we want to create a navigation section and inject it into our main layout, welcome.blade.php. We create a new blade file and name it nav.blade.php in. In this file, we create a section called Navigation and define the markup for this section:
@section('navigation') <li><a href="about">About</a></li> <li><a href="policy">Policy</a></li> <li><a href="app">Mobile App</a></li> @endsection
At this point, the navigation simply exists. To actually inject it inside the main layout, we use
@yield.
@yield
The
@yield function is used to bring a section into a layout file. To yield a section, we need to pass the correct path to that file inside the
@yield function. To define the path, the starting point is the views folder.
For example, assuming that we created nav.blade.php inside the views folder, this is how we bring it into welcome.blade.php:
<div class="container"> <div class="navigation"> @yield('nav') </div> <!-- Other Markups --> </div>
Importing Assets With
@section and
@yield
With
@section and
@yield, you can also load CSS and JS files to a page. This makes it possible to organize your application's layout in the best possible manner.
To demonstrate, I'll create a third blade file and name it styles.blade.php. Inside this file, I'll bring in the stylesheet used in my app:
@section('navigation') "> <style> form { display: flex; align-items: center; justify-content: center; } .container { margin-top: 8rem; } .alert { text-align: center; display: flex; align-items: center; justify-content: space-between; } </style> @endsection
Instead of writing the styles directly in welcome.blade.php, I can simply do this instead:
<head> <!-- Styles --> @yield('styles') </head>
There is so much more to learn about Blade. If you're interested in exploring further, be sure to check out the documentation.
Conclusion
After reading this tutorial, you've learned:
- What Laravel is and how it's different from other PHP frameworks.
- Where to download Laravel and how to set it up.
- How Laravel's Routing system works.
- How to create your first Laravel Controller.
- How to create your first Laravel View.
- How to use Laravel's Blade Templating Engine.
Laravel is truly an amazing framework. It's fast, simple, elegant, and so easy to use. It absolutely merits being considered as the framework to use for your next project.
The Laravel category on Envato Market is growing fast, too. There's a wide selection of useful scripts to help you with your projects, whether you want to build an image-sharing community, add avatar functionality, or much more.
|
https://code.tutsplus.com/tutorials/building-web-applications-from-scratch-with-laravel--net-25517?ec_unit=translation-info-language
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Security
Negotiation Exception Class
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Indicates that an error occurred while negotiating the security context for a message.
public ref class SecurityNegotiationException : System::ServiceModel::CommunicationException
public class SecurityNegotiationException : System.ServiceModel.CommunicationException
[System.Serializable] public class SecurityNegotiationException : System.ServiceModel.CommunicationException
type SecurityNegotiationException = class inherit CommunicationException
[<System.Serializable>] type SecurityNegotiationException = class inherit CommunicationException
Public Class SecurityNegotiationException Inherits CommunicationException
- Inheritance
- SecurityNegotiationException
- Inheritance
- SecurityNegotiationException
- Attributes
-
Remarks
This exception can happen in the following cases:
While negotiating the initial security context. The exact error depends on the negotiation technology used: either Simple and Protected GSS-API Negotiation (SPNEGO) or TLSNEGO. For more information, see Security Protocols.
While establishing a security session on top of an initial security context.
During key renewal for an existing security session.
Security negotiation errors can occur as part of the Spnego/Sslnego security protocol or as part of the SecureConversation protocol.
|
https://learn.microsoft.com/en-us/dotnet/api/system.servicemodel.security.securitynegotiationexception?view=dotnet-plat-ext-5.0
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
netwire
Functional reactive programming library
See all snapshots
netwire appears in
netwire-5.0.3@sha256:52f0e6d59d0033441f70dc6c5789bf4c896654823a5e6a7249f58aed4b3f9b38,2180
Module documentation for 5.0.3
- Control
- FRP
Netwire
Netwire is a functional reactive programming (FRP) library with signal inhibition. It implements three related concepts, wires, intervals and events, the most important of which is the wire. To work with wires we will need a few imports:
import FRP.Netwire import Prelude hiding ((.), id)
The
FRP.Netwire module exports the basic types and helper functions.
It also has some convenience reexports you will pretty much always need
when working with wires, including
Control.Category. This is why we
need the explicit
Prelude import.
In general wires are generalized automaton arrows, so you can express
many design patterns using them. The
FRP.Netwire module provides a
proper FRP framework based on them, which strictly respects continuous
time and discrete event semantics. When developing a framework based on
Netwire, e.g. a GUI library or a game engine, you may want to import
Control.Wire instead.
Introduction
The following type is central to the entire library:
data Wire s e m a b
Don’t worry about the large number of type arguments. They all have very simple meanings, which will be explained below.
A value of this type is called a wire and represents a reactive
value of type
b, that is a value that may change over time. It may
depend on a reactive value of type
a. In a sense a wire is a function
from a reactive value of type
a to a reactive value of type
b, so
whenever you see something of type
Wire s e m a b your mind should
draw an arrow from
a to
b. In FRP terminology a reactive value is
called a behavior.
A constant reactive value can be constructed using
pure:
pure 15
This wire is the reactive value 15. It does not depend on other reactive values and does not change over time. This suggests that there is an applicative interface to wires, which is indeed the case:
liftA2 (+) (pure 15) (pure 17)
This reactive value is the sum of two reactive values, each of which is just a constant, 15 and 17 respectively. So this is the constant reactive value 32. Let’s spell out its type:
myWire :: (Monad m, Num b) => Wire s e m a b myWire = liftA2 (+) (pure 15) (pure 17)
This indicates that
m is some kind of underlying monad. As an
application developer you don’t have to concern yourself much about it.
Framework developers can use it to allow wires to access environment
values through a reader monad or to produce something (like a GUI)
through a writer monad.
The wires we have seen so far are rather boring. Let’s look at a more interesting one:
time :: (HasTime t s) => Wire s e m a t
This wire represents the current local time, which starts at zero when
execution begins. It does not make any assumptions about the time type
other than that it is a numeric type with a
Real instance. This is
enforced implicitly by the
HasTime constraint.
The type of this wire gives some insight into the
s parameter. Wires
are generally pure and do not have access to the system clock or other
run-time information. The timing information has to come from outside
and is passed to the wire through a value of type
s, called the state
delta. We will learn more about this in the next section about
executing wires.
Since there is an applicative interface you can also apply
fmap to a
wire to apply a function to its value:
fmap (2*) time
This reactive value is a clock that is twice as fast as the regular
local time clock. If you use system time as your clock, then the time
type
t will most likely be
NominalDiffTime from
Data.Time.Clock.
However, you will usually want to have time of type
Double or some
other floating point type. There is a predefined wire for this:
timeF :: (Fractional b, HasTime t s, Monad m) => Wire s e m a b timeF = fmap realToFrac time
If you think of reactive values as graphs with the horizontal axis
representing time, then the
time wire is just a straight diagonal line
and constant wires (constructed by
pure) are just horizontal lines.
You can use the applicative interface to perform arithmetic on them:
liftA2 (\t c -> c - 2*t) time (pure 60)
This gives you a countdown clock that starts at 60 and runs twice as fast as the regular clock. So it after two seconds its value will be 56, decreasing by 2 each second.
Testing wires
Enough theory, we wanna see some performance now! Let’s write a simple
program to test a constant (
pure) wire:
import Control.Wire import Prelude hiding ((.), id) wire :: (Monad m) => Wire s () m a Integer wire = pure 15 main :: IO () main = testWire (pure ()) wire
This should just display the value 15. Abort the program by pressing
Ctrl-C. The
testWire function is a convenience to examine wires. It
just executes the wire and continuously prints its value to stdout:
testWire :: (MonadIO m, Show b, Show e) => Session m s -> (forall a. Wire s e Identity a b) -> m c
The type signatures in Netwire are known to be scary. =) But like most of the library the underlying meaning is actually very simple. Conceptually the wire is run continuously step by step, at each step increasing its local time slightly. This process is traditionally called stepping.
As an FRP developer you assume a continuous time model, so you don’t observe this stepping process from the point of view of your reactive application, but it can be useful to know that wire execution is actually a discrete process.
The first argument of
testWire needs some explanation. It is a recipe
for state deltas. In the above example we have just used
pure (),
meaning that we don’t use anything stateful from the outside world,
particularly we don’t use a clock. From the type signature it is also
clear that this sets
s = ().
The second argument is the wire to run. The input type is quantified
meaning that it needs to be polymorphic in its input type. In other
words it means that the wire does not depend on any other reactive
value. The underlying monad is
Identity with the obvious meaning that
this wire cannot have any monadic effects.
The following application just displays the number of seconds passed since program start (with some subsecond precision):
wire :: (HasTime t s) => Wire s () m a t wire = time main :: IO () main = testWire clockSession_ wire
Since this time the wire actually needs a clock we use
clockSession_
as the second argument:
clockSession_ :: (Applicative m, MonadIO m) => Session m (Timed NominalDiffTime ())
It will instantiate
s to be
Timed NominalDiffTime (). This type
indeed has a
HasTime instance with
t being
NominalDiffTime. In
simpler words it provides a clock to the wire. At first it may seem
weird to use
NominalDiffTime instead of something like
UTCTime, but
this is reasonable, because time is relative to the wire’s start time.
Also later in the section about switching we will see that a wire does
not necessarily start when the program starts.
Constructing wires
Now that we know how to test wires we can start constructing more
complicated wires. First of all it is handy that there are many
convenience instances, including
Num. Instead of
pure 15 we can
simply write
15. Also instead of
liftA2 (+) time (pure 17)
we can simply write:
time + 17
This clock starts at 17 instead of zero. Let’s make it run twice as fast:
2*time + 17
If you have trouble wrapping your head around such an expression it may
help to read
a*b + c mathematically as
a(t)*b(t) + c(t) and read
time as simply
t.
So far we have seen wires that ignore their input. The following wire uses its input:
integral 5
It literally integrates its input value with respect to time. Its argument is the integration constant, i.e. the start value. To supply an input simply compose it:
integral 5 . 3
Remember that
3 really means
pure 3, a constant wire. The integral
of the constant 3 is
3*t + c and here
c = 5. Here is another
example:
integral 5 . time
Since
time denotes
t the integral will be
t^2/2 + c, again with
c = 5. This may sound like a complicated, sophisticated wire, but it’s
really not. Surprisingly there is no crazy algebra or complicated
numerical algorithm going on under the hood. Integrating over time
requires one addition and one division each frame. So there is nothing
wrong with using it extensively to animate a scene or to move objects in
a game.
Sometimes categorical composition and the applicative interface can be inconvenient, in which case you may choose to use the arrow interface. The above integration can be expressed the following way:
proc _ -> do t <- time -< () integral 5 -< t
Since
time ignores its input signal, we just give it a constant signal
with value
(). We name time’s value
t and pass it as the input
signal to
integral.
Intervals
Wires may choose to produce a signal only for a limited amount of time. We refer to those wires as intervals. When a wire does not produce, then it inhibits. Example:
for 3
This wire acts like the identity wire in that it passes its input signal through unchanged:
for 3 . "yes"
The signal of this wire will be “yes”, but after three seconds it will stop to act like the identity wire and will inhibit forever.
When you use
testWire inhibition will be displayed as “I:” followed by
a value, the inhibition value. This is what the
e parameter to
Wire is. It’s called the inhibition monoid:
for :: (HasTime t s, Monoid e) => t -> Wire s e m a a
As you can see the input and output types are the same and fully
polymorphic, hinting at the identity-like behavior. All predefined
intervals inhibit with the
mempty value. When the wire inhibits, you
don’t get a signal of type
a, but rather an inhibition value of type
e. Netwire does not interpret this value in any way and in most cases
you would simply use
e = ().
Intervals give you a very elegant way to combine wires:
for 3 . "yes" <|> "no"
This wire produces “yes” for three seconds. Then the wire to the left
of
<|> will stop producing, so
<|> will use the wire to its right
instead. You can read the operator as a left-biased “or”. The signal
of the wire
w1 <|> w2 will be the signal of the leftmost component
wire that actually produced a signal. There are a number of predefined
interval wires. The above signal can be written equivalently as:
after 3 . "no" <|> "yes"
The left wire will inhibit for the first three seconds, so during that
interval the right wire is chosen. After that, as suggested by its
name, the
after wire starts acting like the identity wire, so the left
side takes precedence. Once the time period has passed the
after wire
will produce forever, leaving the “yes” wire never to be reached again.
However, you can easily combine intervals:
after 5 . for 6 . "Blip!" <|> "Look at me..."
The left wire will produce after five seconds from the beginning for six seconds from the beginning, so effectively it will produce for one second. When you animate this wire, you will see the string “Look at me…” for five seconds, then you will see “Blip!” for one second, then finally it will go back to “Look at me…” and display that one forever.
Events
Events are things that happen at certain points in time. Examples
include button presses, network packets or even just reaching a certain
point in time. As such they can be thought of as lists of values
together with their occurrence times. Events are actually first class
signals of the
Event type:
data Event a
For example the predefined
never event is the event that never occurs:
never :: Wire s e m a (Event b)
As suggested by the type events contain a value. Netwire does not
export the constructors of the
Event type by default. If you are a
framework developer you can import the
Control.Wire.Unsafe.Event
module to implement your own events. A game engine may include events
for key presses or certain things happening in the scene. However, as
an application developer you should view this type as being opaque.
This is necessary in order to protect continuous time semantics. You
cannot access event values directly.
There are a number of ways to respond to an event. The primary way to
do this in Netwire is to turn events into intervals. There are a number
of predefined wires for that purpose, for example
asSoonAs:
asSoonAs :: (Monoid e) => Wire s e m (Event a) a
This wire takes an event signal as its input. Initially it inhibits,
but as soon as the event occurs for the first time, it produces the
event’s last value forever. The
at event will occur only once after
the given time period has passed:
at :: (HasTime t s) => t -> Wire s e m a (Event a)
Example:
at 3 . "blubb"
This event will occur after three seconds, and the event’s value will be
“blubb”. Using
asSoonAs we can turn this into an interval:
asSoonAs . at 3 . "blubb"
This wire will inhibit for three seconds and then start producing. It will produce the value “blubb” forever. That’s the event’s last value after three seconds, and it will never change, because the event does not occur ever again. Here is an example that may be more representative of that property:
asSoonAs . at 3 . time
This wire inhibits for three seconds, then it produces the value 3 (or a
value close to it) forever. Notice that this is not a clock. It does
not produce the current time, but the
time at the point in time when
the event occurred.
To combine multiple events there are a number of options. In principle you should think of event values to form a semigroup (of your choice), because events can occur simultaneously. However, in many cases the actual value of the event is not that interesting, so there is an easy way to get a left- or right-biased combination:
(at 2 <& at 3) . time
This event occurs two times, namely once after two seconds and once after three seconds. In each case the event value will be the occurrence time. Here is an interesting case:
at 2 . "blah" <& at 2 . "blubb"
These events will occur simultaneously. The value will be “blah”,
because
<& means left-biased combination. There is also
&> for
right-biased combination. If event values actually form a semigroup,
then you can just use monoidal composition:
at 2 . "blah" <> at 2 . "blubb"
Again these events occur at the same time, but this time the event value will be “blahblubb”. Note that you are using two Monoid instances and one Semigroup instance here. If the signals of two wires form a monoid, then wires themselves form a monoid:
w1 <> w2 = liftA2 (<>) w1 w2
There are many predefined event-wires and many combinators for
manipulating events in the
Control.Wire.Event module. A common events
is the
now event:
now :: Wire s e m a (Event a)
This event occurs once at the beginning.
Switching
We still lack a meaningful way to respond to events. This is where
switching comes in, sometimes also called dynamic switching. The
most important combinator for switching is
-->:
w1 --> w2
The idea is really straightforward: This wire acts like
w1 as long as
it produces. As soon as it stops producing it is discarded and
w2
takes its place. Example:
for 3 . "yes" --> "no"
In this case the behavior will be the same as in the intervals section, but with two major differences: Firstly when the first interval ends, it is completely discarded and garbage-collected, never to be seen again. Secondly and more importantly the point in time of switching will be the beginning for the new wire. Example:
for 3 . time --> time
This wire will show a clock counting to three seconds, then it will start over from zero. This is why we usually refer to time as local time.
Recursion is fully supported. Here is a fun example:
netwireIsCool = for 2 . "Once upon a time..." --> for 3 . "... games were completely imperative..." --> for 2 . "... but then..." --> for 10 . ("Netwire 5! " <> anim) --> netwireIsCool where anim = holdFor 0.5 . periodic 1 . "Hoo..." <|> "...ray!"
Changes
5.0.3: Maintenance release
- Fixed constraints for Semigroup-Monoid-Proposal
- Fixed flags for older GHCs
Contributors:
5.0.2: Maintenance release
- Moved to Git and GitHub.
- Relaxed profunctors dependency (finally).
- Moved language extensions into the individual modules.
- Minor style changes.
|
https://www.stackage.org/lts-18.8/package/netwire-5.0.3
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
- Java Tutorial
- Java - Home
- Java - Overview
- Java - Environment Setup
- Java - Basic Syntax
- Java - Object & Classes
- Java - Constructors
- Java - Basic Datatypes
- Java - Variable Types
- Java - Modifier Types
- Java - Basic Operators
- Java - Loop Control
- Java - Decision Making
- Java - Numbers
- Java - Characters
- Java - Strings
- Java - Arrays
- Java - Date & Time
- Java - Regular Expressions
- Java - Methods
- Java - Files and I/O
- Java - Exceptions
- Java - Inner classes
- Java Object Oriented
- Java - Inheritance
- Java - Overriding
- Java - Polymorphism
- Java - Abstraction
- Java - Encapsulation
- Java - Interfaces
- Java - Packages
- Java Advanced
- Java - Data Structures
- Java - Collections
- Java - Generics
- Java - Serialization
- Java - Networking
- Java - Sending Email
- Java - Multithreading
- Java - Applet Basics
- Java - Documentation
- Java Useful Resources
- Java - Questions and Answers
- Java - Quick Guide
- Java - Useful Resources
- Java - Discussion
- Java - Examples
- Selected Reading
- UPSC IAS Exams Notes
- Developer's Best Practices
- Questions and Answers
- Effective Resume Writing
- HR Interview Questions
- Computer Glossary
- Who is Who
Java - Basic Operators
Get your Java dream job! Beginners interview preparation
85 Lectures 6 hours
Core Java bootcamp program with Hands on practice
99 Lectures 17 hours
The following table lists the bitwise operators −
Assume integer variable A holds 60 and variable B holds 13 then −
The Logical Operators
The following table lists the logical operators −
Assume Boolean variables A holds true and variable B holds false, then −
The Assignment Operators
Following are the assignment operators supported by Java language −
variable x = (expression) ? value if true : value if false
Following is an example −
Example
public class Test { public static void main(String args[]) { int a, b; a = 10; b = (a == 1) ? 20: 30; System.out.println( "Value of b is : " + b ); b = (a == 10) ? 20: 30; System.out.println( "Value of b is : " + b ); } }
This will produce the following result −
Output
Value of b is : 30 Value of b is : 20
instanceof Operator
This operator is used only for object reference variables. The operator checks whether the object is of a particular type (class type or interface type). instanceof operator is written as −
( Object reference variable ) instanceof (class/interface type)
If the object referred by the variable on the left side of the operator passes the IS-A check for the class/interface type on the right side, then the result will be true. Following is an example −
Example
public class Test { public static void main(String args[]) { String name = "James"; // following will return true since name is type of String boolean result = name instanceof String; System.out.println( result ); } }
This will produce the following result −
Output
true
This operator will still return true, if the object being compared is the assignment compatible with the type on the right. Following is one more example −
Example
class Vehicle {} public class Car extends Vehicle { public static void main(String args[]) { Vehicle a = new Car(); boolean result = a instanceof Car; System.out.println( result ); } }
This will produce the following result −
Output?
The next chapter will explain about loop control in Java programming. The chapter will describe various types of loops and how these loops can be used in Java program development and for what purposes they are being used.
|
https://www.tutorialspoint.com/java/java_basic_operators.htm
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
ping_construct − Constructor for the liboping class
#include <oping.h> pingobj_t *ping_construct (void); void ping_destroy (pingobj_t *obj);
The ping_construct constructor allocates the memory necessary for a liboping object, initializes that memory and returns a pointer to it.
The ping_destroy iterates over all hosts associated with the liboping object obj, closes the sockets, removes the hosts and frees obj’s memory.
The ping_construct constructor returns a pointer to the allocated memory or NULL if no memory could be allocated.
ping_setopt(3), ping_send(3), ping_host_add(3), ping_get_error(3), ping_iterator_get(3), liboping(3)
liboping is written by Florian octo Forster <octo at verplant.org>. Its homepage can be found at <>.
(c) 2005−2009 by Florian octo Forster.
|
http://man.m.sourcentral.org/f17/3+ping_construct
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Using absolute imports to better organize your React project is a great way. Relative imports are hard to follow and break during refactoring. Absolute imports manage your project easier as it grows. Forget long relative imports after this article. This is my 40th Medium article.
The Problem
What if your project’s folder structure is complex, and you need to go up in it? Inside of your components, you have imports that look like the below example with relative imports.
import {MyComponent} from ‘../../../../components/MyComponent’;
You can break the above import by changing the path of the component from which you are importing your
MyComponent. Let’s assume you decide to move
MyComponent into its own folder. Then you would need to update all of your imports in your project and add one extra
../ to all of your imports. Relative imports has some more problems.
Pretty hard to refactor
It becomes worse as you get further deeper into it.
You need to change the entire codebase if you need to extract the code to be used externally as an NPM module.
Absolute Imports
By using absolute imports, you can alias some folders to a name like below:
import {MyComponent} from ‘components/MyComponent’;
Absolute imports have some advantages.
There is no
../../../../hell. Therefore easier to type out the imports.
Easily copy-paste the code with imports into another file in the project and not have to tinker with import paths.
It is short and sweet
The below example is a file with Relative imports.
import React from "react"; import Button from "../../Button/Button"; import { LINKS, STRINGS } from "../../../utils/constants"; import styles from "./App.module.css"; function App() { return ( <div className={styles.App}> <Button>{STRINGS.HELLO}</Button> <a href={LINKS.HELP}>Learn more</a> </div> ); } export default App;
Make the imports in the above file prettier.
import React from "react"; import { LINKS, STRINGS } from "utils/constants"; import Button from "components/Button/Button"; import styles from "./App.module.css"; function App() { return ( <div className={styles.App}> <Button>{STRINGS.HELLO}</Button> <a href={LINKS.HELP}>Learn more</a> </div> ); } export default App;
Therefore, how can you use absolute imports with ReactJS?
Using TypeScript
If you need to set up absolute imports in your TypeScript application add/update your
tsconfig.json file in the root directory of the project. Then you need to update the compiler option
baseUrl in the file.
{ "compilerOptions": { "baseUrl": "src" }, "include": ["src"] }
Using JavaScript
Setting up absolute imports to Typescript and setting up absolute imports to JavaScript is pretty much the same process. Create the
jsconfig.json file in the root directory of the project. Then you need to update the following snippet.
{ "compilerOptions": { "baseUrl": "src" }, "include": ["src"] }
Now you can import your components like this.
import {MyComponent} from ‘components/MyComponent’;
You can also use the compiler option
paths as well. Perhaps you want to alias your
component folder. For that, you need to set up your
tsconfig.json, or
jsconfig.json as shown in below:
{ "compilerOptions": { "baseUrl": "./", "paths": { "@component/*": ["src/components/*"], } } }
Now you can import the components from your component folder like this:
import {MyComponent} from ‘@component/MyComponent’;
Is that enough?
Well, no… You need to make your IDE smart to understand absolute imports in your files. Here I am going to mention the progress for the top 2 IDEs. Those are VS Code and WebStrom.
For VS Code
VS Code is smart enough to understand the
tsconfig.json, or
jsconfig.json file. Intellisense and jump-to-source are just working fine with absolute imports.
Therefore, you can follow the above process.
For WebStrom / IntelliJ Idea
Select the src folder in the project window and right-click on it. Select the option Mark Directory as and then select the Resources Root option.
Now go to Settings -> Editor -> Code Style -> JavaScript and select the Imports tab. Then check the Use paths relative to the project, resource or sources roots.
Now WebStrom knows where the absolute imports are pointing. There won’t no warnings and autocomplete/ jump-to-source will work. This means the auto-import mechanism uses absolute imports.
If you are a strict developer like me, use something like Airbnb’s ESLint config.
With ESLint
Create React App also has an ESLint setup but it has a minimal set of rules. eslint-plugin-import is used by Airbnb and this plugin checks undefined imports. When you are going to use Airbnb’s ESLint config it will give you the error shown below:
You can fix the error by add
settings prop in your ESLint config. That setting prop point that your imports might be relative to
src folder. Therefore, you need to add update your ESLint config in
.eslintrc file like this:
"eslintConfig": { "extends": ["airbnb", "prettier", "plugin:jsx-a11y/recommended"], "settings": { "import/resolver": { "node": { "paths": ["src"], "extensions": [".js", ".jsx", ".ts", ".tsx"] } } } },
You don’t need to install any NPM modules to avoid the ESLint error, adding the
settings prop is enough.
By Convention
Absolute imports have been possible for a long time with Webpack. When you are naming your aliased folder, you need to use PascalCase/CamelCase because it is the convention follow in the Webpack.
Conclusion
Absolute imports might befuddle a new developer for a while. If they understand it, then it is pretty easy to use. Therefore, I suggest including a few lines about importing mechanisms in your Readme, or you might link to this article. I am not going to change any content after I publish this article. I hope you enjoyed this small trick to better organize your React project.
Have fun coding! 😃
|
https://plainenglish.io/blog/why-and-how-to-use-absolute-imports-in-react
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Difference between revisions of "User:Barre/MediaWiki/Extensions"
Latest revision as of 14:46, 9 May 2006
<kw_bread_crumbs>prefix=» |small=1|bgcolor=F9F9F9|trim_prefix=User:|on_top_hack=1</kw_bread_crumbs> MediaWiki, the software that runs this Wiki site, allows developers to write their own extensions to the Wiki markup. An extension defines an HTML/XML-style tag which can be used in the Wiki editor like any other markup. If you want to write your own extensions, check those resources.
So here it goes... <-- The links below give an error message: "Fatal error: Call to undefined function: setshowtoc() in /mounts/raid/projects/KitwareWeb/mediawiki-1.5.7-namic/extensions/kwIncludeFile.php on line 303"/ --JohnMcDonnell 10:46, 9 May 2006 (EDT)
- kw_bread_crumbs
<kw_include_file>url=*checkout*/scripts/media-wiki-extensions/kwBreadCrumbs.php?content-type=text%2Fplain&root=kwGridWeb%7Cpre=0%7Ccollapse_par=1%7Cpreg_match=/\/\*\s*=*Description=*\s*(.*?)\n\n/sm</kw_include_file>
- kw_include_file
<kw_include_file>url=*checkout*/scripts/media-wiki-extensions/kwIncludeFile.php?content-type=text%2Fplain&root=kwGridWeb%7Cpre=0%7Ccollapse_par=1%7Cpreg_match=/\/\*\s*=*Description=*\s*(.*?)\n\n/sm</kw_include_file>
- kw_site_map
<kw_include_file>url=*checkout*/scripts/media-wiki-extensions/kwSiteMap.php?content-type=text%2Fplain&root=kwGridWeb%7Cpre=0%7Ccollapse_par=1%7Cpreg_match=/\/\*\s*=*Description=*\s*(.*?)\n\n/sm</kw_include_file>
- kw_article_time_stamp
<kw_include_file>url=*checkout*/scripts/media-wiki-extensions/kwArticleTimeStamp.php?content-type=text%2Fplain&root=kwGridWeb%7Cpre=0%7Ccollapse_par=1%7Cpreg_match=/\/\*\s*=*Description=*\s*(.*?)\n\n/sm</kw_include_file>
Resources
- MediaWiki generated documentation
- Help For Mediawiki Hackers
- Database Layout and Schema
- Programming notes (Chris Phoenix, CRN)
- Wikimedia mailing lists
Cache Problem
As of MediaWiki 1.3 and 1.4, the extension feature is limited by the caching mechanism. If your extension is used to display dynamic contents and therefore needs to be re-executed each time the page is accessed, you will notice pretty early on that it does not work as expected. The problem is that MediaWiki caches the contents of the page the first time it is rendered, and serves that cached output until the corresponding page is modified again. Several parameters are considered when the decision is made to use the cached output instead of re-rendering the page, and most of them deal with comparing the creation time of the cache against the creation time of the page (cur_timestamp/cur_touched is the SQL database). If the cache is older than the page, it is re-rendered.
The code below uses that knowledge to 'touch' the page and invalidate its cache. It is indeed slightly identical to the code in the Title::invalidateCache() method. Sadly invalidateCache() can not be used in an extension: even though it sets cur_touched to 'now', at the time we would be calling this method we would still be in the process of creating and rendering the page itself and the page would be cached anyway once we would be done with our extension. At the end of the day the cache would always end up newer than cur_touched, defeating the whole purpose of calling invalidateCache(). The trick here is to set cur_touched in the future, something not too intrusive, say 'now' + 120 seconds, provided that we expect the whole page (and our extension code) to be executed and rendered within 120 seconds. That way, cur_touched remains 'fresher' than the cache, and the next time the page is accessed, the cache creation time will appear to be older than cur_touched, forcing the page to be re-rendered, and forcing cur_touched to be, again, set in the future and appear fresher than the new cache, etc.
$ts = mktime(); $now = gmdate("YmdHis", $ts + 120); $ns = $wgTitle->getNamespace(); $ti = wfStrencode($wgTitle->getDBkey()); $sql = "UPDATE cur SET cur_touched='$now' WHERE cur_namespace=$ns AND cur_title='$ti'"; wfQuery($sql, DB_WRITE, "");
|
https://public.kitware.com/Wiki/index.php?title=User:Barre/MediaWiki/Extensions&diff=cur&oldid=724
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Question:
Create the equivalent of a four-function calculator. The program should ask the user to enter a number, an operator, and another number. (Use floating point.) It should then carry out the specified arithmetical operation: adding, subtracting, multiplying, or dividing the two numbers. Use a switch statement to select the operation. Finally, display the result. When it finishes the calculation, the program should ask whether the user wants to do another calculation. The response can be ‘y’ or ‘n’. Some sample interaction with the program might look like this:
Enter first number, operator, second number: 10 / 3
Answer = 3.333333
Do another (y/n)? y
Enter first number, operator, second number: 12 + 100
Answer = 112
Do another (y/n)? n
Explanation:
Below mention code is compiled in Visual Studio 2015 and Code Blocks 13.12,output snap is attached.. If any problem you feel and you want some explanation feel free to contact us.
Code:
/**************************************************|
/*************C++ Programs And Projects************|
***************************************************/
#include <iostream>
using namespace std;
int main()
{
double n1, n2, ans;
char oper, ch;
do {
cout << "\nEnter first number, operator, second number : ";
cin >> n1 >> oper >> n2;
switch (oper)
{
case '+': ans = n1 + n2; break;
case '-': ans = n1 - n2; break;
case '*': ans = n1 * n2; break;
case '/': ans = n1 / n2; break;
default: ans = 0;
}
cout << "Answer = " << ans;
cout << "\nDo another(Enter ‘y’ or ‘n’) ? ";
cin >> ch;
} while (ch != 'n');
return 0;
}
Output:
Related Articles:
0 Questions:
|
https://www.cppexamples.xyz/2017/01/models-four-function-calculator.html
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Scrapy Beginners Series Part 3: Storing Data With Scrapy
In Part 1 and Part 2 of this Python Scrapy 5-Part Beginner Series we learned how to build a basic scrapy spider and get it to scrape some data from a website as well as how to clean up data as it was being scraped.
In Part 3 we will be exploring how to save the data into files/formats which would work for most common use cases. We'll be looking at how to save the data to a CSV or JSON file as well as how to save the data to a database or S3 bucket.. (This Tutorial)!
In this tutorial, Part 3: Storing Data With Scrapy we're going to cover:
- Using Feed Exporters
- Saving Data to a JSON or CSV file
- Saving Data to Amazon S3 Storage
- Saving Data to a Database
With the intro out of the way let's get down to business.
Need help scraping the web?
Then check out ScrapeOps, the complete toolkit for web scraping.
Using Feed Exporters
Scrapy already has a way to save the data to several different formats. Scrapy call's these ready to go export methods Feed Exporters.
Out of the box scrapy provides the following formats to save/export the scraped data:
- JSON file format
- CVS file format
- XML file format
- Pythons pickle format
The files which are generated can then be saved to the following places using a Feed Exporter:
- The machine Scrapy is running on (obviously)
- To a remote machine using FTP (file transfer protocall)
- To Amazon S3 Storage
- To Google Cloud Storage
- Standard output
In this guide we're going to give examples on how your can use Feed Exporters to store your data in different file formats and locations. However, there are many more ways you can store data with Scrapy.
Saving Data to a JSON or CSV File
We've already quickly looked at how to export the data to JSON and CSV in part one of this series but we'll quickly go over how to store the data to a JSON file and a CSV file one more time. Feel free to skip ahead if you know how to do this already!
To get the data to be saved in the most simple way for a once off job we can use the following commands:
Saving in JSON format
To save to a JSON file simply add the flag
-o to the
scrapy crawl command along with the file path you want to save the file to:
scrapy crawl chocolatespider -o my_scraped_chocolate_data.json
You can also define an absolute path like this:
scrapy crawl chocolatespider -O
Saving in CSV format
To save to a CSV file add the flag
-o to the
scrapy crawl command along with the file path you want to save the file to:
scrapy crawl chocolatespider -o my_scraped_chocolate_data.csv
You can also define an absolute path like this:
scrapy crawl chocolatespider -O
You can also decide whether to overwrite or append the data to the output file.
For example, when using the crawl or runspider commands, you can use the
-O option instead of
-o to overwrite the output file. (Be sure to remember the difference as this might be confusing!)
Saving Data to Amazon S3 Storage
Now that we have saved the data to a CSV file, lets save the created CSV files straight to an Amazon S3 bucket (You need to already have one setup).
You can check out how to set up an S3 bucket with amazon here:
OK- First we need to install Botocore which is an external Python library created by Amazon to help with connecting to S3.
pip3 install botocore
Now that we have that installed we can save the file to S3 by specifying the URI to your Amazon S3 bucket:
scrapy crawl chocolatespider -O s3://aws_key:aws_secret@mybucket/path/to/myscrapeddata.csv:csv
Obviously you will need to replace the
aws_key &
aws_secret with your own Amazon Key & Secret. As well as putting in your bucket name and file path. We need the
:csv at the end to specify the format but this could be
:json or
:xml.
You can also save the
aws_key &
aws_secret in your project settings file:
AWS_ACCESS_KEY_ID = 'myaccesskeyhere'
AWS_SECRET_ACCESS_KEY = 'mysecretkeyhere'
Note: When saving data with this method the AWS S3 Feed Exporter uses delayed file delivery. This means that the file is first temporarily saved locally to the machine the scraper is running on and then it's uploaded to AWS once the spider has completed the job.
Saving Data to MySQL and PostgreSQL Databases
Here well show you how to save the data to MySQL and PostgreSQL databases. To do this we'll be using Item Pipelines again.
For this we are presuming that you already have a database setup called
chocolate_scraping.
For more information on setting up a MySQL or Postgres database check out the following resources:
Windows: MySQL - Postgres -
Mac: MySQL - Postgres -
Ubuntu: MySQL - Postgres -
Saving data to a MySQL database
We are assuming you have already have a database setup and a table called
chocolate_products in your DB.
If not you can login to your database and run the following command to create the table:
CREATE TABLE IF NOT EXISTS chocolate_products (
id INT AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(255),
price VARCHAR(255),
url TEXT
);
To save the data to the databases we're again going to be using the Item Pipelines. If you don't know what they are please check out part 2 of this series where we go through how to use Scrapy Item Pipelines!
The first step in our new Item Pipeline class, as you may expect is to connect to our MySQL database and the table in which we will be storing our scraped data.
We are going to need to install the
mysql package for Python.
pip install mysql
If you already have mysql installed on your computer - you might only need the connection package.
pip install mysql-connector-python
Then create a Item pipeline in our
pipelines.py file that will connect with the database.
import mysql.connector
class SavingToMySQLPipeline(object):
def __init__(self):
self.create_connection()
def create_connection(self):
self.conn = mysql.connector.connect(
host = 'localhost',
user = 'root',
password = '123456',
database = 'chocolate_scraping'
)
self.curr = self.conn.cursor()
Now that we are connecting to the database, for the next part we need to save each chocolate product we scrape into our database item by item as they are processed by Scrapy.
To do that we will use the scrapy
process_item() function (which runs after each item is scraped) and then create a new function called
store_in_db in which we will run the MySQL command to store the Item data into our
chocolate_products table.
import mysql.connector
class SavingToMySQLPipeline(object):
def __init__(self):
self.create_connection()
def create_connection(self):
self.connection = mysql.connector.connect(
host = 'localhost',
user = 'root',
password = '123456',
database = 'chocolate_scraping'
)
self.curr = self.connection.cursor()
def process_item(self, item, spider):
self.store_db(item)
#we need to return the item below as Scrapy expects us to!
return item
def store_db(self, item):
self.curr.execute(""" insert into chocolate_products ( name, price, url) values (%s,%s,%s)""", (
item["name"],
item["price"],
item["url"]
))
self.connection.commit()ToMySQLPipeline': 300,
}
Saving data to a PostgreSQL database
As in the above section - we are assuming you have already have a postgres database setup and you have created a table called
chocolate_products in your DB.
If not you can login to your postgres database and run the following command to create the table:
CREATE TABLE IF NOT EXISTS chocolate_products (
id SERIAL PRIMARY KEY,
name VARCHAR(255),
price VARCHAR(255),
url TEXT
);
To save the data to a PostgreSQL database the main thing we need to do is to update how the connection is created. To do so we will will install the Python package
psycopg2.
pip install psycopg2
And update the connection library in our function.
import psycopg2
class SavingToPostgresPipeline(object):
def __init__(self):
self.create_connection()
def create_connection(self):
self.connection = psycopg2.connect(
host="localhost",
database="chocolate_scraping",
user="root",
password="123456")
self.curr = self.connection.cursor()
def process_item(self, item, spider):
self.store_db(item)
#we need to return the item below as scrapy expects us to!
return item
def store_db(self, item):
try:
self.curr.execute(""" insert into chocolate_products (name, price, url) values (%s, %s, %s)""", (
item["name"],
item["price"],
item["url"]
))
except BaseException as e:
print(e)
self.connection.commit()
AgainToPostgresPipeline': 300,
}
After running our spider again we should be able to see the data in our database if we run a simple select command like the following(after logging into our database!):
select * from chocolate_products;
Next Steps
We hope you now have a good understanding of how to save the data you've scraped into the file or database you need! If you have any questions leave them in the comments below and we'll do our best to help out!
If you would like the code from this example please check it out on Github.
The next tutorial covers how to make our spider production ready by managing our user agents & IPs so we don't get blocked. (Part 4)
Need a Free Proxy? Then check out our Proxy Comparison Tool that allows to compare the pricing, features and limits of every proxy provider on the market so you can find the one that best suits your needs. Including the best free plans.
|
https://scrapeops.io/python-scrapy-playbook/scrapy-beginners-guide-storing-data/
|
CC-MAIN-2022-40
|
en
|
refinedweb
|
Syntax of XAML
XAML is a markup language derived from XML. The graphic components are defined by open or closed tags, with attributes.
Reserved words are capitalized.
Example of tag with contents:
<Button> Click </Button>
And without contents:
<Button/>
The attributes, as variables, are assigned by a value. In XML these values
are put between quotes, contrary to the contents which is either a plain text,
or one or more other tags.
For example we define a 100 pixels width button, with the Width attribute:
<Button Width="100" /> <Button Width="100"> Click </Button>
We will see that the tags can contain other tags, and even as the attributes can become tags.
Syntax of properties
Properties of an object, saying what characterizes it, can be written in the form of attributes. For example the property of the background color of a rectangle is given with the Fill attribute:
<Rectangle Fill="Red"/>
In order to describe complex properties, XAML has an alternative format called
"Property element syntax", which extends the syntax of XML and gives
to the dot a special meaning.
In XML, the value of an attribute must be a string of characters. In XAML, it can be another object of the language.
But object is not directly assigned with the attribute by the equal sign, it is associated with a point according to syntax specific to XAML with the form:
ElementName.PropertyName
Let us take again the example of the Rectangle object and the Fill property, for the filling color, the attribute becomes a tag:
<Rectangle> <Rectangle.Fill> </Rectangle.Fill> </Button>
That makes it possible to add tags and attributes to the Fill property, such as for example a texture made with a picture, which one will see later in this manual.
Another example is provided by the specification of the language, that of the button to which is associated a menu list:
<Button> <Button.ContextMenu> <ContextMenu> <MenuItem> Open </MenuItem> <MenuItem> Close </MenuItem> </ContextMenu> </Button.ContextMenu> </Button>
ContextMenu, which is a list of menus, becomes property of button thanks to Button.ContextMenu, and is declared inside the definition of the button.
Contents of a tag can be expressed as a property. Thus the text of the button is a property which is written in contents or as value of the content attribute:
<Button> Click </Button
<Button Content="Click"/>
Namespaces
Namespaces are denoted as attributes of the main global container of the
XAML file, Canvas or Window, or Page. They are URL preset which will be given
in the examples and which correspond to the type of XAML definition.
Example:
<Canvas xmlns="" xmlns:
For namespaces other than the default space (the first line), the prefix (as x above) must precede each element of this namespace. For example:
x: elementName
For each element of the x namespace.
Attached properties
It is a concept specific to XAML. Syntax is the same one as for the elements of property seen higher, a name of property is connected to a name of a type (rather than to a name of element such as Button).
typeName.propertyName
The goal is to add properties to a type. The elements of this type will be able to then have the properties thus defined.
Attached events
In XAML, an event can be defined by a type, while the event handlers will
be attached to the objects represented by tags (such as Button).
The sSyntax is still the same:
TypeName.EventName
Extending the language
It is possible to extend the XAML language thanks to a particular syntax:
the extension is inserted between { } , made up of the name of a class followed
by the instance.
Example taken in the specification of the language:
<Button Style="{StaticResource MyStyle}"> Click </Button>
The StaticResource class contains the definitions added, and the MyStyle
instance becomes a property of Button.
We will be able to use Button.MyStyle in the definition of the button and to benefit of the new implemented features in the class.
Characteristics and root tags: Canvas, Page, Application
Case-sensitivity
XAML is case-sensitive. Capital first letters must be preserved.
That does not apply necessarily to values of attributes, thus true and True are acceptable, if the parser support them.
White spaces
Extra spaces are ignored, as well as special characters as the tab code, that are equivalent to spaces.
Root tag
Like any XML document, a XAML definition must be included into a single
tag, the root element.
For a WPF page, the container is the Page or Window tag.
For Silverlight, it is Canvas.
For an application it is Application.
|
https://www.scriptol.com/xaml/syntax.php
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
Compile the following code:
class Base { protected final void finalize() { System.out.println("Base.finalize"); }}class Derived extends Base { private void fin_lize() { System.out.println("Derived.finalize"); } public static void main(String[] args) { new Derived(); System.gc(); System.runFinalization(); }}
Now patch Derived.class with a hex editor to change fin_lize to finalize. Run with OpenJDK or Oracle JRE/JDK and observe that Derived.finalize is printed.
This happens because the finalize method is called via JNI reflection and the method name is resolved against the real object type instead of java.lang.Object. The OpenJDK code can be seen here.
A better way to do this would be to add an invokeFinalize method to JavaLangAccess. This avoids the expense of native code and reflection.
Yesterday.
While I was working on rewriting IKVM's dynamic binding support based on method handles I stumbled into a rather serious bug in the Oracle Java implementation. It allowed any code to overwrite public final fields. This has been fixed in Update 21.
Below is a proof of concept that disables the security manager. It works by changing Double.TYPE to Integer.TYPE and then using reflection to copy an integer field from one object to another, but because of the patched TYPE fields reflection thinks the integer field is a double and copies 8 bytes instead of 4.
import java.lang.invoke.MethodHandle;import java.lang.reflect.Field;import static java.lang.invoke.MethodHandles.lookup;class Union1 { int field1; Object field2;}class Union2 { int field1; SystemClass field2;}class SystemClass { Object;}class PoC { public static void main(String[] args) throws Throwable { System.out.println(System.getSecurityManager()); disableSecurityManager(); System.out.println(System.getSecurityManager()); } static void disableSecurityManager() throws Throwable { MethodHandle mh1, mh2; mh1 = lookup().findStaticSetter(Double.class, "TYPE", Class.class); mh2 = lookup().findStaticSetter(Integer.class, "TYPE", Class.class); Field fld1 = Union1.class.getDeclaredField("field1"); Field fld2 = Union2.class.getDeclaredField("field1"); Class classInt = int.class; Class classDouble = double.class; mh1.invokeExact(int.class); mh2.invokeExact((Class)null); Union1 u1 = new Union1(); u1.field2 = System.class; Union2 u2 = new Union2(); fld2.set(u2, fld1.get(u1)); mh1.invokeExact(classDouble); mh2.invokeExact(classInt); if (u2.field2.f29 == System.getSecurityManager()) { u2.field2.f29 = null; } else if (u2.field2.f30 == System.getSecurityManager()) { u2.field2.f30 = null; } else { System.out.println("security manager field not found"); } }}
The first release candidate is available. It can be downloaded here or from NuGet.
What's New (relative to IKVM.NET 7.2):
Changes since previous development snapshot:
Binaries available here: ikvmbin-7.3.4830.0.zip
Sources: ikvmsrc-7.3.4830.0.zip, openjdk-7u6-b24-stripped.zip
My burst of inspiration ended. So I guess it's time to do a release soon.
Changes:
Binaries available here: ikvmbin-7.3.4826.zip); }}
Binaries available here: ikvmbin-7.3.4817.zip
Another week, another snapshot.
Binaries available here: ikvmbin-7.3.4811.zip
I finally created a github repository for ikdasm. A couple of weeks ago I fixed some ildasm compatibility issues and changed pinvokeimpl and marshalas handling to use the IKVM.Reflection specific APIs instead of decoding the pseudo custom attributes (I also used the new IKVM.Reflection feature to disable generating pseudo custom attributes). I added external module support and fixed some other small issues.
The primary purpose of ikdasm is to make sure that the IKVM.Reflection API is complete. I believe it now exposes all relevant Managed PE file features. The secondary feature is to test IKVM.Reflection and to make this easy ikdasm replicates (almost) all ildasm quirks to enable comparing the output files. I disassembled a large number of files (including C++/CLI and Managed C++ files) and compared the results.
Only a small subset of ildasm functionality has been cloned. There is no GUI and most command line options are also not implemented.
Pull requests are welcome.
Still more changes to better support what I'll start calling "mixed mode" (i.e. ikvmc compiled assemblies that use dynamically loaded classes or use dynamic binding to classes in another assembly).
Another change is that runtime stub class generation is now based on the ikvmstub class file writer, instead of the very old code that was reflection based. This means that stubs can now acurately be generated even when some of the types involved are not available.
Binaries available here: ikvmbin-7.3.4804.zip
A quick update because the previous snapshot had a bug that caused ikvmc to be completely broken on the CLR x86 JIT.
Binaries available here: ikvmbin-7.3.4799.zip
|
http://weblog.ikvm.net/default.aspx?date=2013-05-30
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
Subject: Re: [Boost-bugs] [Boost C++ Libraries] #6145: MinGW / GCC 4.6.1 warns about conflicts between Boost.Thread and Boost.InterProcess (Win32 C API)
From: Boost C++ Libraries (noreply_at_[hidden])
Date: 2014-11-06 04:14:10
#6145: MinGW / GCC 4.6.1 warns about conflicts between Boost.Thread and
Boost.InterProcess (Win32 C API)
-------------------------------------+-------------------------------------
Reporter: Cyril Othenin-Girard | Owner: igaztanaga
<cog@â¦> | Status: new
Type: Bugs | Component: interprocess
Milestone: To Be Determined | Severity: Problem
Version: Boost 1.48.0 | Keywords: MinGW interprocess
Resolution: | thread warning conflict
-------------------------------------+-------------------------------------
Comment (by gau_veldt@â¦):
Replying to [comment:2 gau_veldt@â¦]:
> I still get these warnings in mingw/gcc 4.7.1 on boost 1.56.0 when
#include <boost/thread/mutex.hpp> and friends
Hmm... the conflict warnings disappear when I don't use
boost::interprocess and friends in conjunction with boost::thread and
friends. I guess the ipc and threading headers are incompatible?
-- Ticket URL: <> Boost C++ Libraries <> Boost provides free peer-reviewed portable C++ source libraries.
This archive was generated by hypermail 2.1.7 : 2017-02-16 18:50:17 UTC
|
https://lists.boost.org/boost-bugs/2014/11/38817.php
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
graphql-tag library.
For instance, in GitHunt, we want to display the current user (if logged in) in
the
Profile component:
import { Component, OnInit, OnDestroy } from '@angular/core'; import { Subscription } from 'rxjs'; import { Apollo } from 'apollo-angular'; import gql from 'graphql-tag'; // We use the gql tag to parse our query string into a query document const CurrentUserForProfile = gql` query CurrentUserForProfile { currentUser { login avatar_url } } `; @Component({ ... }) class ProfileComponent implements OnInit, OnDestroy { loading: boolean; currentUser: any; private querySubscription: Subscription; constructor(private apollo: Apollo) {} ngOnInit() { this.querySubscription = this.apollo.watchQuery<any>({ query: CurrentUserForProfile }) .valueChanges .subscribe(({ data, loading }) => { this.loading = loading; this.currentUser = data.currentUser; }); } ngOnDestroy() { this.querySubscription.unsubscribe(); } }
currentUser, the field we've picked out in
CurrentUserForProfile.
We can expect the
data.currentUser to change as the logged-in-ness of the
client and what it knows about the current user changes over time. That
information is stored in Apollo Client's global cache, so if some other query
fetches new information about the current user, this component will update to
remain consistent.
It's also possible to fetch data only once. The
query method of
Apollo
service returns an
Observable that also resolves with the same result as
above.. More about that in
Static Typing.
Providing
options
watchQuery and
query methods expect one argument, an object with options. If
you want to configure the query, you can provide any available option in the
same object where the
query key lives.
If your query takes variables, this is the place to pass them in:
// Suppose our profile query took an avatar size const CurrentUserForProfile = gql` query CurrentUserForProfile($avatarSize: Int!) { currentUser { login avatar_url(avatarSize: $avatarSize) } } `; @Component({ template: ` Login: {{currentUser?.profile}} `, }) class ProfileComponent implements OnInit, OnDestroy { currentUser: any; private querySubscription: Subscription; ngOnInit() { this.querySubscription = this.apollo .watchQuery({ query: CurrentUserForProfile, variables: { avatarSize: 100, }, }) .valueChanges.subscribe(({data}) => { this.currentUser = data.currentUser; }); } ngOnDestroy() { this.querySubscription.unsubscribe(); } }.
This is why we created
SelectPipe. The only argument it receives is the name
of the property you want to get from
data.
import {Component, OnInit} from '@angular/core'; import {Apollo} from 'apollo-angular'; import {Observable} from 'rxjs';; } }
The result of the query has this structure:
{ "data": { "currentUser": { ... }, "feed": [ ... ] } }
Without using
SelectPipe, you would get the whole object instead of only the
data.feed.
Using with RxJS
Apollo is compatible with RxJS by using same Observable so it can be used with
operators.
What's really interesting is that, because of this, you can avoid using
SelectPipe:
import {Component, OnInit} from '@angular/core'; import {Apollo} from 'apollo-angular'; import {Observable} from 'rxjs'; import {map} from 'rxjs/operators';.pipe(map(({data}) => data.feed)); } }
The
map operator we are using here is provided by the RxJS
Observable which
serves as the basis for the
Observable.
To be able to use the
map operator (and most others like
switchMap,
filter,
merge, ...) these have to be explicitly imported as done in the
example:
import {map} from 'rxjs/operators'.
|
https://www.apollographql.com/docs/angular/basics/queries/
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
.Right now, 4k cpu's is known broken because of the stack usage. I'm not willing to debug more of these kinds of stack smashers, they're really nasty to work with. I wonder how many other random failures these have been involved with?This patch also makes the ifdef mess in Kconfig much cleaner and avoids duplicate definitions by just conditionally suppressing the question and giving higher defaults. We can enable MAXSMP and raise the CPU limits some time in the future. But that future is not going to be before 2.6.27 - the code simply isn't ready for it.The reason I picked 512 CPU's as the limit is that we _used_ to limit things to 255. So it's higher than it used to be, but low enough to still feel safe. Considering that a 4k-bit CPU mask (512 bytes) _almost_ worked, the 512-bit (64 bytes) masks are almost certainly fine.Still, sane people should limit their NR_CPUS to 8 or 16 or something like that. Very very few people really need the pain of big NR_CPUS. Not even "just" 512 CPU's.Travis, Ingo and Thomas cc'd, since they were involved in the original commit (1184dc2ffe2c8fb9afb766d870850f2c3165ef25) that raised the limit. Linus--- arch/x86/Kconfig | 30 ++++++++---------------------- 1 files changed, 8 insertions(+), 22 deletions(-)diff --git a/arch/x86/Kconfig b/arch/x86/Kconfigindex 68d91c8..ed92864 100644--- a/arch/x86/Kconfig+++ b/arch/x86/Kconfig@@ -577,35 +577,29 @@ config SWIOTLB config IOMMU_HELPER def_bool (CALGARY_IOMMU || GART_IOMMU || SWIOTLB || AMD_IOMMU)+ config MAXSMP bool "Configure Maximum number of SMP Processors and NUMA Nodes"- depends on X86_64 && SMP+ depends on X86_64 && SMP && BROKEN default n help Configure maximum number of CPUS and NUMA Nodes for this architecture. If unsure, say N. -if MAXSMP-config NR_CPUS- int- default "4096"-endif--if !MAXSMP config NR_CPUS- int "Maximum number of CPUs (2-4096)"- range 2 4096+ int "Maximum number of CPUs (2-512)" if !MAXSMP+ range 2 512 depends on SMP+ default "4096" if MAXSMP default "32" if X86_NUMAQ || X86_SUMMIT || X86_BIGSMP || X86_ES7000 default "8" help This allows you to specify the maximum number of CPUs which this- kernel will support. The maximum supported value is 4096 and the+ kernel will support. The maximum supported value is 512 and the minimum value which makes sense is 2. This is purely to save memory - each supported CPU adds approximately eight kilobytes to the kernel image.-endif config SCHED_SMT bool "SMT (Hyperthreading) scheduler support"@@ -996,17 +990,10 @@ config NUMA_EMU into virtual nodes when booted with "numa=fake=N", where N is the number of nodes. This is only useful for debugging. -if MAXSMP- config NODES_SHIFT- int- default "9"-endif--if !MAXSMP-config NODES_SHIFT- int "Maximum NUMA Nodes (as a power of 2)"+ int "Maximum NUMA Nodes (as a power of 2)" if !MAXSMP range 1 9 if X86_64+ default "9" if MAXSMP default "6" if X86_64 default "4" if X86_NUMAQ default "3"@@ -1014,7 +1001,6 @@ config NODES_SHIFT help Specify the maximum number of NUMA Nodes available on the target system. Increases memory reserved to accomodate various tables.-endif config HAVE_ARCH_BOOTMEM_NODE def_bool y
|
http://lkml.org/lkml/2008/8/25/331
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
ControlCollection Constructor (Control)
.NET Framework (current version)
Namespace: System.Web.UI
Initializes a new instance of the ControlCollection class for the specified parent server control.
Assembly: System.Web (in System.Web.dll)
The following code example is a custom ControlCollection class that overrides the constructor to write messages (which include the name of the Owner property) to the trace log when an instance of the collection is created. You must enable tracing for the page or application for this example to work.
// Create a custom ControlCollection that writes // information to the Trace log when an instance // of the collection is created. [AspNetHostingPermission(SecurityAction.Demand, Level=AspNetHostingPermissionLevel.Minimal)] public class CustomControlCollection : ControlCollection { private HttpContext context; public CustomControlCollection(Control owner) : base(owner) { HttpContext.Current.Trace.Write("The control collection is created."); // Display the Name of the control // that uses this collection when tracing is enabled. HttpContext.Current.Trace.Write("The owner is: " + this.Owner.ToString()); } }
Return to top
.NET Framework
Available since 1.1
Available since 1.1
Show:
|
https://msdn.microsoft.com/en-us/library/system.web.ui.controlcollection.controlcollection.aspx
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Blokkal::Ui::CheckBoxDelegate Class Reference
#include <checkboxdelegate.h>
Detailed Descriptionshort A delegate that paints the checkbox to a custom location
A delegate that paints the checkbox to a custom location
Definition at line 37 of file checkboxdelegate.h.
Constructor & Destructor Documentation
Constructor
Definition at line 41 of file checkboxdelegate.cpp.
Member Function Documentation
Returns the position of the check box for index The rect is invalid if a checkbox is not appropriate for index.
- Returns:
- the position of the check box for index
Definition at line 66 of file checkboxdelegate.cpp.
Returns the size of the the checkbox
- Returns:
- the size of the the checkbox
Definition at line 61 of file checkboxdelegate.cpp.
Reimplemented to use the virtual method checkBoxRect() instead of the check() as in the QItemDelegate implementation
Definition at line 123 of file checkboxdelegate.cpp.
This method does the layout for a simple item with a checkbox (if appropriate for index ) and a text that may be drawn into textRect
Definition at line 177 of file checkboxdelegate.cpp.
Paints the checkbox in the rect returned by checkBoxRect(), if that rect is valid.
Definition at line 97 of file checkboxdelegate.cpp.
Sets the view for this delegate to view
Definition at line 51 of file checkboxdelegate.cpp.
Returns the size hint. This is the size hint for the checkbox. If the rect returned by checkBoxRect is invalid, then this method will return a 0 size.
Definition at line 113 of file checkboxdelegate.cpp.
This method is called when the checkstate of index has changed to newState. The default implementation does nothing.
Definition at line 200 of file checkboxdelegate.cpp.
Returns a pointer to the view.
- Returns:
- a pointer to the view
Definition at line 56 of file checkboxdelegate.cpp.
The documentation for this class was generated from the following files:
|
http://blokkal.sourceforge.net/docs/0.1.0/classBlokkal_1_1Ui_1_1CheckBoxDelegate.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Results 1 to 2 of 2
- Join Date
- Jul 2014
- 2
- Thanks
- 0
- Thanked 0 Times in 0 Posts
ssl python client server and verification
Greetings
Currently I am working on a prototype just for my personal uses of a 2 step authentication
I am in need of writing it in python because it will be an add on to a program that is written in python as well.
What needs to happen is I have 2 clients (these are mockups)
1 desktop client
1 phone client
1 server
The server will be listening on an SSL data socket on some port say 55576 (doesn't matter) it needs to handle multiple ssl connections at a time
The desktop client will open up and then try to make a secure connection to the server, It then sends a "verification" message which the server checks to see if this is actually the desktop. This can be hardcoded
The 2nd client is then initiated the (phone) and it makes a SSL connection also and a verification message also.
Currently what I have running is a multi thread server with a desktop and "phone" and a server but it is not in SSL because I do not know how to encapsulate all of it. Along with I don't have any verification.
Can someone help me with the SSL certs and encapsulation and the verification?
honestly not that good at programming
Here is a mock up of a verification process that I want to implement into the server but I dont know where to add it nor how to get it to function 100% because within the client when it connects it should automatically send 1 message that is hardcoded into it that would be "client1desktop or client2mobile" and based on the first message it would check to see if it matches if it doesn't it disconnects
How would i edit the code in the client to autosend a specified message and after the server checks how do i get out of the loop of the authentication/verification just so you can continue on
[code]
(intromessage would be the first message the client sends but this would be done automatically )
auth = intromessage
if auth == client1desktop:
print "Client Desktop Accepted"
elif auth == client2mobile:
print "Client Mobile Accepted"
else:
print "No valid client detected"
print "Good bye!"
(force discconect from server)
[code]
Here is the code for the server
Code:
from socket import * import thread, ssl def handler(clientsocket, clientaddr): print "Accepted connection from: ", clientaddr while 1: data = clientsocket.recv(1024)()
Here is the code for the client (they are both the exact same)
Code:
from socket import * if __name__ == '__main__': host = 'localhost' port = 55567 buf = 1024 addr = (host, port) clientsocket = socket(AF_INET, SOCK_STREAM) clientsocket.connect(addr) while 1: data = raw_input(">> ") if not data: break else: clientsocket.send(data) data = clientsocket.recv(buf) if not data: break else: print data clientsocket.close()
Also see diagram which may help in understanding what I need help with
------------------------
Deleted by eenookami, 500 points refunded - Sat Jul 12 2014 07:21:17 GMT+0930 (CST)
- Join Date
- Jul 2014
- 2
- Thanks
- 0
- Thanked 0 Times in 0 Posts
whoops forgot the attachment
Also why can't i edit my own post?
|
http://www.codingforums.com/python/325936-ssl-python-client-server-verification.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
Best way to warn user a supplier invoice is already entered (match on supplier invoice number)
What would be the best way to implement a "duplicate check" when users are recording a supplier invoice?
The aim would be to avoid accountants entering the same invoice twice, based on the supplier invoice number.
It shouldn't prevent them from entering the invoice - just warn them that an existing invoice is already in the system with the same number.
constraints prevent duplicate entry, the objective here is just to detect a potential duplicate and warn the user - letting them enter it again if they need to.
I ended up writing a function:
def onchange_supplier_invoice(self, cr, uid, ids, reference, context=None): msg = reference + 'has already been used!' res = {'reference': reference, } if not reference: return res # Get all Supplier Invoices invoice_ids = self.search(cr, uid, [('type','=','in_invoice')], context) # Get all the references for all Supplier Invoices invoices = self.read(cr, uid, invoice_ids, fields=['reference'],context=context) # Check for duplicates for inv in invoices: if inv['reference'] == reference: raise osv.except_osv('Possible duplicate Invoice!',msg) return res
And triggering the function via on_change in the relevant field (in this case reference).
<field name="reference" string="Invoice Number" attrs="{'required':[('state','!=','draft')]}" on_change="onchange_supplier_invoice(reference)"/>
Depending on the use case, searching for Supplier Invoices matching the current supplier (if populated) may be better.
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
|
https://www.odoo.com/forum/help-1/question/best-way-to-warn-user-a-supplier-invoice-is-already-entered-match-on-supplier-invoice-number-19898
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Useful for deploy server installation. Adds management command to make configs for your project. Automatically recognizes media directories in 3-party applications
- Adds management command to make configs for your project. Now it can generate config for lighttpd, logrotate, monit and deploy scripts.
- Automatically recognizes media directories in 3-party applications and takes them into account.
Installation:
In settings.py:
Put config to your INSTALLED_APPS.
Set domain names for your project
CONFIG_SITES = ['', ]
Domains for which you want redirects to your site
CONFIG_REDIRECTS = ['project-name.com', ]
Serving static files
Set path to media for unusual 3-party application
CONFIG_APP_MEDIA = { 'application-name': [ ('media-root', 'media-url', ), ] }
Media folders with same name as application modulde will be added automatically. For example, in tinymce module media files
tinymce/ media/ tinymce/ js/tinymce.js css/style.css
will be available at url:
/media/tinymce/js/tinymce.js /media/tinymce/css/style.js
Stop! Aren’t Django staticfiles do that?
Yes, they do. But django-servre-config is older than staticfiles and does same job. This feature will is deprecated and will not be supported since 0.2.x release. We recommend to use Django contrib application django.contrib.statifiles. Read below about how to do it.
In urls.py:
If you use django-server-config for serving static media, add following code to the urls.py for serve static files in debug mode. Add it BEFORE django.views.static.serve
if settings.DEBUG: urlpatterns += patterns('', (r'^', include('config.urls')))
In buildout.cfg:
If you are using zc.buildout, you can add to your parts make-config to make config files automaticaly:
[make-config] recipe = iw.recipe.cmd on_install = true on_update = true cmds = sudo rm -f bin/init.d bin/lighttpd bin/logrotate bin/monit bin/*.py bin/django make_config init.d > bin/init.d bin/django make_config lighttpd > bin/lighttpd bin/django make_config logrotate > bin/logrotate bin/django make_config monit > bin/monit # Enable backups with duply & duplicity () bin/django make_config duply_conf > bin/duply_conf bin/django make_config duply_pre > bin/duply_pre bin/django make_config duply_post > bin/duply_post bin/django make_config duply_exclude > bin/duply_exclude # Collect static automaticaly sudo rm -Rf static bin/django collectstatic -l ---noinput sudo chown www-data:www-data -R static bin/django make_config install.py > bin/install.py bin/django make_config uninstall.py > bin/uninstall.py bin/django make_config enable.py > bin/enable.py bin/django make_config disable.py > bin/disable.py sudo chown root:root bin/* sudo chmod ug=rw,o=r bin/* sudo chmod ug=rwx,o=rx bin/init.d bin/django bin/buildout echo Configs were saved to "bin/"
Without bulidout
If you are not using zc.buildlout, you can add to repository shell script with commands above, it will give same effect.
Staticfiles support
Since 0.1.1 server-config supports django.contrib.staticfiles and staticfiles apps. If one of them present in INSTALLED_APPS, config for webserver will be generated with appropriate rewrite rule.
If staticfiles is used there is no need to include config.urls in urlconf.py. On the other hand, probably you will want to include staticfiles_urlpatterns() from staticfiles app (see: django documentation about it)
from django.contrib.staticfiles.urls import staticfiles_urlpatterns urlpatterns += staticfiles_urlpatterns()
Duply/Duplicity backups
Django-server-config can automatically create backups configuration files. It supports duply (duplicity) configuration scheme. Duplicity is backup system written in python and using rsync algorithm and Duply is bash configuration wrapper for Duplicity.
Backup settings
Security Note
To start using backups you should specify path to main configuration file for duply. Django-server-config expects file in *.ini format. This file can contains secret passwords, so file supposed to be located somewhere in /etc/duply/conf.ini and belongs to root (superuser).
- BACKUP_DUPLY_CONFIG
- Path to duply configuration file
- BACKUP_TEMP_DIR
- Temp directory, where database backups will be located. Database dumps will be deleted from file system after each backup session. Default value: '/var/backups/postgres'
Only PostgreSQL database backups are supported!
Duply configuration file
It is quite simple to configure duply. You can create duply initial config simply from command line::
duply <profile> create
Then look at ~/.duply/<profile>/conf and follow comments.
Moreover, you can use ours config template:
[duply] GPG_PW='**********' TARGET='s3+http://**********@com.mycompany.server/' SOURCE='/' MAX_AGE=1M MAX_FULL_BACKUPS=5 MAX_FULLBKP_AGE=1W VOLSIZE=50 DUPL_PARAMS="$DUPL_PARAMS --full-if-older-than $MAX_FULLBKP_AGE --volsize $VOLSIZE "
This template encrypts backups with GPG and uplaod to AmazonS3 bucket com.mycompany.server.
Pay attention to the TAGET option. Django-server-config will automatiocally add project_name to TARGET. E.g. rendered config will contain value:
TARGET = s3+http://**********@com.mycompany.server/<myproject>
Consider trailing slash in *.ini config, django-server-config adds only myproject without slash.
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/redsolutioncms.django-server-config/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Hello again! In this post I will discuss my progress regarding the key-detection feature of the Hermes 3000 Upcycled Typewriter project. If you missed my first two posts you can view them here:
In my last post I showed how I attached loops of copper wire to the typewriter to act as part of a "normally-closed" switch (with the type-bars acting as the other part). I have now attached wires to these loops so they can be connected to the shift registers. Check out this picture, it looks like my typewriter has some funky hair!
You may notice a few things in the image above. First, the wires are different colors. There are 44 wires total so I used a different color for each group of 11 wires. Can you imagine 44+ wires of the same color running through the typewriter? It would be very difficult to trace out a specific wire when troubleshooting. Next, you may notice the wires have pins on them. I crimped pins on the ends of the wires so I could plug them into a breadboard easily. It was a tedious but necessary task. Lastly, you may not be able to see from the picture but those are silicone wires. Silicone wires are more flexible and flexibility is important when working with limited space (like inside a typewriter).
After I had the wires attached, I decided to work on the code for the MCU. In my previous post I explored programming the MCU to read data from one shift register. Now I need to expand that to 6 shift registers - which is a fairly straightforward task. Of course there is a difference between simply reading inputs and actually interpreting those inputs...
I realized that sometimes, when typing, I will accidentally hit multiple keys. For example, if I was typing an 'a', I might also slightly tap the 's' key. It won't be enough to lift the 's' type-bar up to the paper, but it's enough to lift it off the loop of copper wire briefly. The Edison needs to know how to distinguish between the real input (the 'a' in this case) and the false input (the 's' in this case). I'm thinking the best way to do this is based on timing. The Edison will see the real input being "on" for a longer time than the false input, so maybe I can use this to my advantage...
I have come up with the following algorithm. It is based on timing and involves two important values that I am calling framerate and checkrate. The framerate value is the number of milliseconds that elapse between reading the data from the shift registers. Every time the MCU reads the data from the shift registers, it stores this in an array which I am calling a "frame" (the idea is similar to the way a video camera rapidly takes pictures - also called frames). The checkrate value is the number of frames the MCU will read before trying to decode the data. So here's how it all works: every framerate milliseconds, the MCU shifts the data in from all 6 registers, stores it in a frame, and puts that frame into an array of frames. After it has read checkrate frames, it will move on to decode them. The way the frames are interpreted are by a simple AND procedure. If a key was held down during all the frames, then the corresponding bit in each frame will be a 1. If a key was accidentally pressed, then its corresponding bit should only be 1 in some frames, but 0 in others. So if I perform a bitwise AND on each frame in the array, I will be left with a "master frame" which should only have one bit that is 1. This bit would represent the key being pressed. I just need to experiment with the values of framerate and checkrate to determine the optimal values.
You can see this code below. Note that I have not implemented the "decode" function yet - but that should be simple.
#include "mcu_api.h" #include "mcu_errno.h" //FRAMERATE: 20ms //CHECKRATE: load 5 frames before checking inputs (total 100ms) enum {CHECKRATE = 5, FRAMERATE = 20}; //Data pins are the serial data coming from //shift registers. In order, IO2 = IO7. int datapins[6] = {128, 12, 129, 13, 182, 48}; //Control signals for shift registers. int SH_LD = 49; //IO8 int CLK_INH = 183; //IO9 int CLK = 41; //IO10 //For receiving a "command" from the host system. //Right now there are no commands unsigned char cmd[255]; unsigned char result[1] = ""; char frames[CHECKRATE][6]; int i, j; //loop variables /* * See datasheet for proper operation of shift * register (74HC165). */ void shift_in(int frame){ //Load into register gpio_write(SH_LD, 0); gpio_write(SH_LD, 1); //Turn off clock inhibit gpio_write(CLK_INH, 0); //Get data from register for(i = 0; i < 8; i++){ for(j = 0; j < 6; j++){ //Shift data in frames[frame][j] <<= 1; frames[frame][j] |= gpio_read(datapins[j]); //Cycle the clock gpio_write(CLK, 1); gpio_write(CLK, 0); } } //Turn clock inhibit back on gpio_write(CLK_INH, 1); } unsigned char decode(char frame[]){ /* * Still need to implement this. * Basically, maps the data to a specific character */ } void mcu_main() { for (i = 0; i < 6; i++){ gpio_setup(datapins[i], 0); } gpio_setup(SH_LD, 1); gpio_setup(CLK_INH, 1); gpio_setup(CLK, 1); gpio_write(CLK_INH, 1); gpio_write(CLK, 1); gpio_write(SH_LD, 0); unsigned long time = time_ms(); int frame_count = 0; char master_frame[6]; while (1) { //Every FRAMERATE ms we want to read a new frame //until we have read CHECKRATE frames. Then we will //decode the input and send it to the CPU. if(frame_count == CHECKRATE){ frame_count = 0; for(i = 0; i < CHECKRATE; i++){ for(j = 0; j < 6; j++){ master_frame[j] = master_frame[j] & frames[i][j]; } } } if(time_ms() >= time + FRAMERATE){ //enough time has passed, check frame time = time_ms(); if(frame_count < CHECKRATE){ //not enough frames checked yet, check frame shift_in(frame_count); frame_count++; } else { } } } }
Note that I haven't tested this code yet - so if you see any glaring problems please let me know! While I'm talking about testing, I should mention another aspect of this project. You see, when I finally test this system and I don't get the results I expect, how will I know where the problem is? It could be that the code is wrong or maybe the typewriter isn't wired correctly. So I decided to design a testing board. Basically it's a bunch of LEDs - 44 to be exact - which will turn on when a key is pressed. This way I can see what the Edison would "see". You can see this testing board in use below.
The board is pretty simple. Basically, the wire that would go to the Edison now goes to the base of an NPN transistor. The transistor controls an LED. See how the LEDs are color-coordinated with wires? Pretty smart, eh? And they are in order - the 3rd blue LED corresponds to the 3rd blue wire on the typewriter. The whole thing is powered by a 12V lead-acid battery connected to a 5V linear voltage regulator. So now when I press a key, a light will turn on! This board took a while to set up but it has already revealed some very useful information. First, some of the type-bars weren't resting on the copper loops very well. I have adjusted some of them to fix the problem but some others will require more work. Second, I realized that while I'm typing, lots of the LEDs flicker! I believe that this is caused by the constant vibration that the typewriter feels while someone is typing. This vibration is enough to cause the type-bars to break the connection with the wires momentarily. The Edison will read this as me hitting many keys at once, but I believe that my code will be able to distinguish between the real key press and the vibrations causing momentary "glitches".
So that's where I am now! Here's what I need to do next:
- Fix the copper loops to ensure good connectivity
- Finish the code and test it!
- Wire up the keys that don't use shift registers (like "shift")
Stay tuned next week for the continuation of this project! Let me know if you have any questions, comments, concerns, or criticisms.
P.S.: I would like to add a special thanks to a youtuber - TypewriterJustice. Previously, I had removed the carriage of the typewriter (the part that holds the paper and moves left/right as you type). Well it turns out that only typewriter experts should do this! I tried so many times but could not re-install the carriage . So in my desperation, I sent a message to TypewriterJustice asking him if he had any tips. He responded to me very quickly and with his advice (and a little luck), I managed to get it back on! Thanks TypewriterJustice!
|
https://www.element14.com/community/community/design-challenges/upcycleit/blog/2017/04/21/upcycle-it-hermes-3000-post-3
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
my code should prompt the user to enter a string and a character, and tell where the character is located
for instance
"Welcome" and "e"
returns
"2, 7"
How can my code be fixed? Code is here. Thanks in advance (this is not homework, but some hint could be useful anyway if you don't want to post a solution).
import java.util.Scanner;
public class Test {
public static void main(String[] args) {
System.out.println("Please enter a string and a character");
Scanner input = new Scanner(System.in);
String s = input.nextLine();
char ch = input.next().charAt(0);
System.out.println(count(ch));
}
public static int count (String s, char a) {
int count = 0;
for (int i = 0; i < s.length(); i++) {
if (s.charAt(i) == a) {
count++;
}
}
return count;
}
}
Some mistakes:
Your code doesn't compile. Call:
System.out.println(count(s, ch));
instead of
System.out.println(count(ch));
You count the number of appearances. Instead, you should keep the indexes. You can use a
String or you can add them to a list / array and convert it later to what you want.
public static String count(String s, char a) { String result = ""; for (int i = 0; i < s.length(); i++) { if (s.charAt(i) == a) { result += (i+1) + ", "; } } return result.substring(0, result.length() - 2); }
I used
i+1 instead of
i because the indexes start at
0 in Java.
I also returned the string
result.substring(0, result.length() - 2) without its last 2 characters, because I added
, after every character.
|
https://codedump.io/share/J1pQi84VVWVh/1/java-method-to-find-the-occurencies-of-a-certain-character
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
I need to build up a counting function starting from a dictionary. The dictionary is a classical Bag_of_Words and looks like as follows:
D={'the':5, 'pow':2, 'poo':2, 'row':2, 'bub':1, 'bob':1}
ID={5:1, 2:3, 1:2}
values=list(ID.keys())
values.sort(reverse=True)
Lk=[]
Nw=0
for val in values:
Nw=Nw+ID[val]
Lk.append([Nw, val])
you can create a sorted array of your word counts then find the insertion point with
np.searchsorted to get how many items are to either side of it... np.searchsorted is very efficient and fast. If your dictionary doesn't change often this call is basically free compared to other methods
import numpy as np def F(n, D): #creating the array each time would be slow if it doesn't change move this #outside the function arr = np.array(D.values()) arr.sort() L = len(arr) return L - np.searchsorted(arr, n) #this line does all the work...
what's going on....
first we take just the word counts (and convert to a sorted array)...
D = {"I'm": 12, "pretty": 3, "sure":12, "the": 45, "Donald": 12, "is": 3, "on": 90, "crack": 11} vals = np.arrau(D.values()) #vals = array([90, 12, 12, 3, 11, 12, 45, 3]) vals.sort() #vals = array([ 3, 3, 11, 12, 12, 12, 45, 90])
then if we want to know how many values are greater than or equal to
n, we simply find the length of the list beyond the first number greater than or equal to
n. We do this by determining the leftmost index where
n would be inserted (insertion sort) and subtracting that from the total number of positions (
len)
# how many are >= 10? # insertion point for value of 10.. # # | index: 2 # v # array([ 3, 3, 11, 12, 12, 12, 45, 90]) #find how many elements there are #len(arr) = 8 #subtract.. 2-8 = 6 elements that are >= 10
|
https://codedump.io/share/eg9wwCTjPUSd/1/building-up-a-counting-function
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
csConfigAccess Class Reference
This is a simple convenience class that can be used to deal with the sytem config manager. More...
#include <csutil/cfgacc.h>
Detailed Description
This
access to system configuration
access to system configuration
The documentation for this class was generated from the following file:
Generated for Crystal Space 2.0 by doxygen 1.6.1
|
http://www.crystalspace3d.org/docs/online/new0/classcsConfigAccess.html
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
This document will help you understand how the Real-Time Garbage Collector (RTGC) works,
focusing on the most important configuration parameters and use cases.
Readme File: Links to all the Java RTS technical documents
Introduction
Normal Mode: Recycling Memory Without Disrupting Any Real-Time Threads
Advanced Tuning in Normal Mode: Tuning the Normal Priority of the RTGC
Expert Tuning in Normal Mode: Tuning the Start Time in Normal Mode for the RTGC
Expert Tuning in Boosted Mode: Tuning the Boost Time
Sun Java Real-Time System
2.1 (Java RTS) supports two garbage collectors:
The
other garbage collectors featured in the Java SE version of the
HotSpot virtual machine are not supported.
The default garbage collector is the new real-time garbage collector.
This RTGC can be configured to execute in a deterministic manner.
The RTGC might exhibit lower throughput than non-real-time
collectors, particularly on uniprocessors.
For applications that are more sensitive to the collector's throughput
than to the pause times caused by the collector's execution,
the RTGC can be turned off with the -XX:-UseRTGC option.
In this case, the non-real-time, serial collector is used,
and all threads except NoHeapRealtimeThreads might suffer from pause
times caused by garbage collection.
See also the
Practical Introduction to Achieving Determinism
document for a detailed description
of example programs that allow you to begin using Java RTS, including
the RTGC, quickly and easily.
The RTGC's activity can be configured with a set of special parameters.
To guarantee determinism, some minimum configuration is necessary. Further tuning can be performed
by trying different values of various parameters, in various
combinations. The parameters that you might configure depend
upon your level of expertise with the RTGC:
basic, advanced,
or expert.
All the parameters are listed in the tables at the end of this document,
in the section Command-Line Options.
Any real-time garbage collector is expected to recycle memory
without disrupting the temporal determinism of time-critical tasks. In
addition, a real-time GC must recycle memory fast enough so that
these tasks are not blocked if the memory is exhausted
when they attempt to allocate more.
This means that a real-time GC must be granted enough processor
cycles to complete its work on time. On the other hand, granting it too many
processor cycles reduces the application's overall throughput.
Therefore, if application throughput is a concern, users must
properly configure real-time GCs.
Work-based real-time collectors execute when allocations are
performed, regardless of the
priority of the threads that are executing. Their configuration defines
how much time is granted to the GC for each allocation.
Time-based real-time collectors execute at specified times,
and at top priority. Their configuration defines when
and how often the GC should run. For instance these real-time GCs can be
scheduled as periodic threads.
However, most of the available real-time GCs do not allow
threads to preempt the GC. The real-time application could be
frozen for a considerable amount of time. The better the real-time GC, the smaller
this time can get. Unfortunately, this approach does not scale well on
multiprocessors.
Another frequent drawback of real-time GCs is that, to ensure
determinism, users must analyze the memory behavior of all the
threads of their applications. Adding a new low-priority thread that
allocates a lot of memory might force the user to change all the
configuration parameters of the real-time GC. It might even result in an
application for which the GC cannot ensure determinism,
requiring the addition of more memory or CPU resources. For instance,
this prevents the use of these real-time GCs for a server on which servlets are
dynamically added and removed, unless the server has enough
memory and CPUs.
Sun's Java RTS and its Real-Time Garbage Collector (RTGC) avoid these two pitfalls.
[Contents]
The important point about the RTGC provided with Java RTS is
that it is fully concurrent, and thus can be preempted at any time.
There is no need to run the RTGC at the highest priority, and there
is no stop-the-world phase where all the application's threads
are suspended during the RTGC execution.
On a multiprocessor, one CPU can be doing some GC work while an
application thread is making some progress on another CPU.
Thus, the RTGC offered by Java RTS is very flexible.
While other real-time GCs RTGC considers that the criticality of
an application's threads is based on their priority and ensures hard
real-time behavior only for real-time threads at the critical level,
while trying to offer soft real-time behavior for real-time threads
below the critical level.
This reduces the total overhead of the RTGC and ensures that the
determinism is not impacted by the addition of new low-priority
application threads.
In addition, this makes the configuration easier because there is no need
to study the allocation behavior of an application in its entirety
in order to configure the RTGC. Determinism is designed by looking only
at the critical tasks.
Finally, the RTGC does not use heap generations, and therefore each
RTGC run recycles the entire heap.
By setting only two parameters, namely a memory threshold and a
priority, you can ensure that threads running at critical priority
will not be disrupted by garbage collection pauses. The big advantage
of this approach is that
these parameters are independent of the non-critical part of the
application. You do not need to reconfigure the RTGC when you add a new
non-critical component or when the machine load changes.
The RTGC tries to recycle memory
fast enough for the non-critical real-time threads, but without
offering any guarantees for them. If the non-critical load increases, the RTGC might
fail to recycle memory fast enough for all the threads. However, this will not
disrupt the critical threads, as long as the memory threshold is correctly
set for the application. Only the non-critical real-time threads
will be blocked and temporarily suffer from some jitter,
due to memory recycling.
The RTGC has an auto-tuning mechanism that tries to find the
best balance between determinism and throughput. Expert users can configure
a few parameters to control this auto-tuning in order to improve this balance.
The function of the RTGC is based on the criticality of the
application threads. Critical threads are those that must
execute within well-defined time limits so that their response times
are predictable (that is, deterministic), whereas
non-critical threads do not have these constraints. Java
RTS real-time threads can be critical or non-critical, whereas non-real-time
threads (java.lang.Thread instances) are non-critical by definition.
java.lang.Thread
The RTGC starts running at its normal priority, and its priority
can be boosted to a higher level if memory falls below a certain level.
(This is explained in the next section.) Both of these priority values
are configurable with command-line options.
The critical boundary is the priority boundary between critical
and non-critical threads. By default, the critical boundary is the same
value as the boosted RTGC priority, but this also is configurable.
The thread types and their priorities are summarized as follows:
No-heap real-time threads. These threads are
instances of the Java RTS class javax.realtime.NoHeapRealtimeThread
(NHRT), and they exhibit "hard" real-time behavior.
These threads are critical by definition and
must execute within defined time limits (a few tens of microseconds).
By definition, these threads do not allocate memory from the
heap and therefore do not depend on garbage collection completion
in order to run properly.
It is the developer's responsibility to ensure that the priority
of the NHRTs is above the RTGC boosted priority
so that they are not preempted by the RTGC.
(The RTGC boosted priority is specified with the
RTGCBoostedPriority parameter.)
We recommend running the NHRTs at a priority higher than the
priority of the critical real-time threads.
Keep in mind that these threads should not use all the CPU,
because that would prevent the RTGC from recycling memory that
might be needed for other threads in the application.
javax.realtime.NoHeapRealtimeThread
RTGCBoostedPriority
Critical real-time threads. These threads are
instances of the Java RTS class javax.realtime.RealtimeThread (RTT),
and they exhibit "hard" real-time behavior.
By definition, their priority is above the critical boundary
and the RTGC boosted priority, and therefore they are not preempted by the RTGC.
These threads execute within defined time limits (a few hundreds of microseconds), provided that the
RTGC is correctly configured. These threads are not blocked on memory allocation, thanks to
the memory reserved for them with the RTGCCriticalReservedBytes parameter.
However, these threads must not use all the CPU,
because that would prevent the RTGC from recycling memory,
resulting in memory exhaustion that would cause non-critical threads
to block on allocation.
javax.realtime.RealtimeThread
Non-critical real-time threads. These threads are also instances of the
javax.realtime.RealtimeThread class (RTT), but they exhibit "soft"
real-time behavior.
By definition, their priority is below the critical boundary.
The non-critical threads do not necessarily have to execute
within defined time limits. Based on their priorities relative to the RTGC priorities,
these threads can be considered as high-importance, medium-importance, or low-importance.
By default, the priority levels of these threads are below the RTGC boosted priority.
If the critical boundary is configured to be above the RTGC
boosted priority, then some non-critical threads could have a priority
above the RTGC boosted priority, thereby allowing them to preempt the RTGC
at its boosted priority. These are considered to be high-importance non-critical threads.
However, since they are still non-critical threads,
they would be blocked on memory allocation when the RTGC goes into
its deterministic mode, as explained in the next section.
The threads with priority below the RTGC boosted priority but above the
RTGC normal priority are considered to be medium-importance non-critical threads.
In addition, the normal priority of the RTGC can be set high
enough to allow the RTGC (running at this normal priority) to preempt
the low-importance non-critical threads, if necessary.
Non-real-time threads. These are instances of the
java.lang.Thread class (JLT) and are non-critical by definition.
In all cases, it is the programmer's responsibility to set the
correct priority to reflect the level of criticality of any thread
in the Java RTS application. Note that this is the base priority
of the thread. Threads that share locks with more critical threads
can be automatically boosted from time to time, via priority inheritance,
to a higher priority, and the RTGC will take into consideration the
change to this higher priority.
The figure below shows the priority levels for the RTGC and
for the different thread types.
Figure 1 Priority Levels for RTGC and Threads
By default, the "normal" priority of the RTGC is above that of
non-real-time threads (JLTs), but at the lowest priority for
real-time threads. When the Java RTS
VM determines that memory is getting low, the RTGC priority is boosted to
a higher level, but still lower than that of the critical threads.
Note: The figure also shows the priority level of the
NHRT threads,
but these threads do not depend on garbage collection activity since
they do not allocate memory from the heap.
The most important property for the RTGC is the priority at
which it runs. Our real-time garbage collector dynamically changes
its priority to try to balance throughput and determinism.
For the parallel (multi-processor) version of the RTGC, the number of
threads supporting RTGC activity is
also important. See the section Improving
the Determinism by Using Multiprocessors.
The RTGC can function in three different modes during a
garbage-collection run, based on the remaining free memory available:
Normal mode.
When free memory goes below the startup memory threshold,
the RTGC runs in normal mode, and at its normal priority.
Boosted mode.
When free memory goes below the boosted memory threshold, the RTGC
priority is boosted to a higher level. This is called boosted mode.
Deterministic mode. When free
memory goes below the critical reserved bytes threshold, the RTGC runs at its boosted
priority level. In addition, memory allocation is blocked for non-critical threads
(priority below the critical boundary). This is deterministic mode.
The figure below shows how the RTGC is scheduled
(with one CPU), based on free memory thresholds.
Figure 2 RTGC Scheduling on One CPU
The RTGC starts its next run at its initial (normal) priority
when free memory goes below the startup memory threshold.
This threshold is calculated from a set of parameters (which the expert user can tune).
One of the parameters used in the calculation of this threshold is
NormalMinFreeBytes. Lower-priority
threads are preempted by higher-priority threads. Since the RTGC is running at
a lower priority, it can be preempted by non-critical real-time
threads. This is called the normal mode.
NormalMinFreeBytes
The Java RTS VM calculates a boosted memory threshold,
based on another set of parameters (which the expert user can tune).
One of the parameters used in the calculation
of this threshold is BoostedMinFreeBytes.
When memory reaches this threshold, the RTGC is boosted to a
higher priority (specified with the RTGCBoostedPriority
parameter), meaning that fewer threads can preempt the RTGC.
This is called the boosted mode.
BoostedMinFreeBytes
Not shown in the figure above is a further implication of the fine tuning
of the RTGC.
If the boosted RTGC value is equal to the
critical boundary value, then only critical threads can preempt the RTGC.
However, the critical boundary could be configured to be higher than the boosted RTGC
priority. In this case, threads with priority below the critical boundary but higher than
the RTGC boosted priority, despite being non-critical threads,
could possibly preempt the RTGC.
If the free memory falls below the critical reserved bytes threshold
(specified by the RTGCCriticalReservedBytes
parameter), the non-critical threads are blocked on allocation,
waiting for the RTGC to recycle memory. This guarantees that
the critical RT threads, and only the critical RT threads,
will be able to allocate memory from the reserved amount.
This mode is called the deterministic mode,
as it assures determinism for the critical threads.
If the free memory remains
below RTGCCriticalReservedBytes, the non-critical threads
will eventually fail with an OutOfMemoryError.
RTGCCriticalReservedBytes
OutOfMemoryError
The figure also shows the priority level of the NHRT threads,
but these threads are not involved in garbage collection since they do not
allocate memory from the heap.
The principle behind the RTGC is the balance between
determinism and throughput. With a finite amount of system resources (CPU time and
memory in particular), Java RTS must ensure
determinism for the critical threads, while ensuring that
the other threads also are able to execute in a timely manner.
Therefore, the RTGC must recycle enough memory for the allocation requests,
while not consuming all the CPU cycles. This delicate balance is
configured with several parameters. Since the RTGC continuously
tunes its own operation, only two of these parameters need to
concern the basic user:
You must configure the RTGCCriticalReservedBytes
parameter in order to guarantee determinism.
The default value for this parameter is zero, which would reserve no memory at all
for the critical threads when the RTGC is boosted to its higher
priority. This is the only parameter that you are required to configure.
The RTGCCriticalBoundary parameter is set to a
default value, but you can configure this value in relation
to the priority values for the other threads in your application.
The remaining parameters should be used in advanced tuning by
the advanced or expert user.
Note: The default behavior of Java RTS is to allow the RTGC to use all the
CPU power, because determinism is our primary goal.
However, with a bad configuration or with applications that
have a large percentage of reachable objects, the RTGC might run
continuously to try to ensure determinism. Therefore, by default Java RTS
forces a wait period between two consecutive RTGC runs in order to
help prevent the system from being blocked due to continuous garbage collection.
The expert user can configure this wait period with the RTGCWaitDuration parameter.
All the parameters are listed in the tables at the end of this document
in the section Command-Line Options.
The RTGC auto-tunes the parameters that control the "normal" mode of
its functionality. Therefore, the basic user does not need to tune
these parameters. The advanced user might need to tune the
value of the RTGCNormalPriority parameter.
Only expert users should attempt to tune the parameters
used to determine when the RTGC starts its next run.
In the RTSJ, the impact of the GC on real-time threads is
implementation-dependent. With Java RTS, the threads running at
the highest priorities can completely avoid garbage collection pauses.
However, the RTGC must start soon enough
to ensure that it completes before memory is exhausted. This works as long
as the allocation behavior is relatively stable, because the RTGC must
be started soon enough to handle the worst case allocation.
In normal mode, the RTGC runs at a lower priority than the
real-time threads. Unfortunately, this does not scale well to larger
applications. As a general rule, the more
real-time threads are running (and allocating memory), the longer an
RTGC cycle lasts.
The amount of memory allocated during a cycle can quickly increase.
To ensure determinism, we must take into account the worst case allocation
behavior, which could possibly cause the RTGC to run continuously.
And even this might not be sufficient to guarantee that memory
would be recycled fast enough.
However, real-world applications normally contain a mix of
critical and non-critical tasks.
For some of the non-critical tasks, timing is important but not
critical. You might be willing to miss a few deadlines when there
is an allocation burst. The gain is that this allows the RTGC to start
much later, improving the global throughput.
With Java RTS, we can configure the behavior of the
non-critical threads independently from the behavior of critical ones.
We consider that the normal behavior for real-time threads is to run
at a higher priority than the RTGC, while the JLTs run at
a lower priority than the RTGC.
The RTGC starts running at RTGCNormalPriority
(which defaults to the minimal real-time
priority). The startup memory threshold determines when the
RTGC starts. The idea is to start it early enough so that it completes
its cycle before reaching the boosted memory threshold,
which would increase the RTGC priority to its boosted level,
creating jitter for the non-critical real-time threads. The
auto-tuning mechanism
takes into account the allocation performed during the last RTGC
cycles to try to start the RTGC just on time, thus maximizing the
throughput while avoiding pause times, assuming that the allocation
behavior is stable.
Advanced
Tuning in Normal Mode: Tuning the Normal Priority of the RTGC.
Expert
Tuning in Normal Mode: Tuning the Start Time in Normal Mode for the RTGC
The startup memory threshold determines when the RTGC starts
at its normal priority. This threshold is calculated by an auto-tuning
mechanism. (Only expert users should attempt to tune this threshold.)
The auto-tuning mechanism uses the following parameters for the normal mode:
NormalMinFreeBytes
NormalSlideFactor
NormalSafetyMargin.
The startup memory threshold determines when the RTGC starts
at its normal priority. This threshold is calculated by an auto-tuning
mechanism. (Only expert users should attempt to tune this threshold.)
The auto-tuning mechanism uses the following parameters for the normal mode:.
When the RTGC priority is boosted,
non-critical threads can be preempted by RTGC threads,
but the RTGC can be preempted by critical threads.
Therefore this can cause non-critical threads to pause
for a long time, unless in a multiprocessor environment.
See the section Improving
the Determinism by Using Multiprocessors.
You can specify the value of the boosted priority with the
RTGCBoostedPriority parameter.
Expert
Tuning in Boosted Mode: Tuning the Boost Time
The boosted memory threshold determines when the RTGC priority
is boosted to a higher level. This threshold is calculated by an auto-tuning
mechanism. (Only expert users should attempt to tune this threshold.)
The auto-tuning mechanism uses the following parameters for the boosted mode:
BoostedMinFreeBytes
BoostedSlideFactor
BoostedSafetyMargin boosted memory threshold determines when the RTGC priority
is boosted to a higher level. This threshold is calculated by an auto-tuning
mechanism. (Only expert users should attempt to tune this threshold.)
The auto-tuning mechanism uses the following parameters for the boosted mode: real-time threads that run at
a priority higher than the critical boundary are
called the critical real-time threads. The RTGC must ensure
that their worst case pause time is very low (at worst a few hundreds
of microseconds).
To guarantee this determinism, you must specify the RTGCCriticalReservedBytes
parameter. You can tune the critical boundary with the RTGCCriticalBoundary parameter.
RTGCCriticalBoundary
When the free memory goes
below the limit specified by RTGCCriticalReservedBytes,
Java RTS blocks the non-critical threads from allocating memory
to prevent them from disrupting the critical threads. If the free memory remains below this threashold
after a few RTGC runs, then OutOfMemoryError conditions might start being
thrown for these non-critical threads.
Note that the critical threads continue running at a priority higher than the RTGC
and can preempt it at any time.
If RTGCCriticalReservedBytes is too low, a critical thread
might block when the memory is full, waiting for the RTGC to free some
memory. If RTGCCriticalReservedBytes is too high, the RTGC
will run more often, preventing the lower priority threads from
running. Hence, this reduces their global throughput.
The RTGC is fully concurrent, that is,
application threads can run concurrently with the RTGC.
Therefore, we can improve the determinism of non-critical threads on
multiprocessors by specifying how many worker threads the RTGC can use.
This is both simpler and safer than depending on parameter tuning alone.
The drawback of parameter tuning alone is that when the RTGC
parameters are underestimated, all non-time-critical threads might be suspended
from time to time. If they are overestimated, the JLTs
will be suspended very often. At worst, they will not make any
progress because the RTGC could consume all the CPU not used by the
real-time threads.
You can specify the initial (normal) and boosted number of parallel worker threads
for the RTGC with the RTGCNormalWorkers and RTGCBoostedWorkers
options, respectively. Thus, when the RTGC
runs, there should still be some CPU cycles not used
by the RTGC and the critical threads. (It is assumed that critical
threads use only a small part of the CPU power.) Hence, non-critical real-time threads
should still be able to make some progress. Even JLTs could get
some CPU cycles.
When the real-time load
is low, RTGCNormalWorkers threads executing the RTGC at RTGCNormalPriority
will be enough to cope with allocation
performed on the remaining processors. However, if the allocation rate
increases or if more real-time threads are running, the RTGC at RTGCNormalPriority, even if
running continuously, might not recycle
memory fast enough. When this happens, the RTGC priority is boosted,
and the number of RTGC threads is increased to RTGCBoostedWorkers.
On multiprocessors, as on uniprocessors, the VM performs auto-tuning to try to maximize
throughput. It will first try to complete the RTGC on time by using RTGCNormalWorkers
threads running at RTGCNormalPriority.
Expert users can balance determinism and throughput by using the
NormalSlideFactor and NormalSafetyMargin parameters.
However, an underestimation has a limited impact because, even if the RTGC is
boosted to RTGCBoostedPriority, the non-critical threads still get some CPU power.
Expert users should focus on the boosted memory threshold,
which defines when the RTGC threads start running at RTGCBoostedPriority.
This is done through BoostedSlideFactor, BoostedSafetyMargin,
and BoostedMinFreeBytes.
(See the section Description of the Auto-Tuning Mechanism.)
The big advantage to this configuration is that
overestimating the threshold (for example, small slide factors and big
safety margins) is not dangerous. Even if this configuration makes the RTGC run
continuously on RTGCBoostedWorkers processors, there will always be a
few processors available for the lower priority threads. The impact
on determinism depends on the ratio between RTGCBoostedWorkers and
the total number of processors.
RTGCBoostedWorkers
The following example shows a simple configuration to easily achieve determinism
(though without optimizing throughput) by limiting the number of processors the RTGC
can use at any one time and forcing the RTGC to run continuously.
Since the RTGC usually needs about 25% of the CPU
power in order to guarantee determinism, we allow the RTGC to use two processors
(parallel worker threads), assuming an 8-processor system.
We also force garbage collection to run almost continuously by
specifying the NormalMinFreeBytes option to be equal to the total heap size.
(As explained in the section Description of the Auto-Tuning Mechanism,
the NormalMinFreeBytes option specifies the amount of free memory below which the RTGC starts
its next cycle.)
-Xms1G -Xmx1G
-XX:NormalMinFreeBytes=1G
-XX:RTGCNormalWorkers=2
When making estimations, consider the following:
If you underestimate the boosted number of worker threads, that is, if
RTGCBoostedWorkers is too low, the RTGC might not
recycle memory fast enough, even if it runs continuously at RTGCBoostedPriority.
This means that the memory will often
decrease down to RTGCCriticalReservedBytes and that the
non-critical threads will block.
If you overestimate the boosted number of worker threads, you have to
be more careful with the BoostedSlideFactor and BoostedSafetyMargin
because the RTGC could be granted
too many CPU cycles, reducing the global throughput.
If you assign values to the boosting
parameters that are too low, this will cause the RTGC to be boosted too late and the
remaining processors will become idle when the memory reaches RTGCCriticalReservedBytes.
Note that this is a temporary
problem that happens only when the allocation behavior changes and
it would only slightly impact the overall throughput.
Setting RTGCBoostedWorkers to the total number of processors
prevents processors from sitting idle in deterministic mode, but causes
the boosted mode to be more disruptive to non-critical threads, as stated above.
As a summary, this expert tuning is a parameterized way to offer
more determinism for the real-time non-critical threads. In addition, it
does not block the machine, avoiding the case where the RTGC runs
continuously on all the processors, at very high priority. However, there is
no free lunch. This automatic gain in determinism can quickly lead to
throughput issues.
This section describes in detail the auto-tuning mechanism
that Java RTS uses to determine the optimal conditions to start the RTGC
or to boost its priority. These conditions are
referred to as the startup memory threshold and the boosted
memory threshold, respectively. Expert users might want to
try to tune these threshold values, using specific parameters.
Note: This same mechanism is used to determine both the
startup memory
threshold and the boosted memory threshold; the only difference is
that the parameter names begin with Normal and Boosted,
respectively.
Therefore, we drop Normal and Boosted
from the parameter names used in the calculations below.
The VM does not try to use the worst case allocation behavior
to ensure the RTGC will complete on time, because this would run the RTGC too
often after the first period of allocation bursts. Hence, the auto-tuning
mechanism uses a depreciation mechanism to compute a "sliding value" at the end of each
GC cycle. This sliding value is the higher of its depreciated
previous value and the memory that was allocated during the current GC
cycle. A safety margin is then applied to this sliding value to
compute the next memory threshold. The RTGC will start (or be boosted
to its higher priority) when the free memory goes below that threshold.
The RTGC uses the amount of memory that was allocated previously
in order to estimate the next threshold. It is expected that this value
will "slide" to a stable amount. (This is why we call it a "sliding value.")
The following formulas are used to calculate the startup
memory threshold and the boosted memory threshold:
SlidingValuenew = MAX(current,
(SlidingValueprevious *
(100 - SlideFactor) / 100))
Threshold = MAX(MinFreeBytes, (SlidingValuenew * (100 +
SafetyMargin) / 100))
SlidingValuenew = MAX(current,
(SlidingValueprevious *
(100 - SlideFactor) / 100))
Threshold = MAX(MinFreeBytes, (SlidingValuenew * (100 +
SafetyMargin) / 100))
This calculation is performed at the end each RTGC cycle.
In the formulas, the variables have the following meanings:
current represents the amount of memory
allocated during the current GC cycle.
current
SlidingValue represents a prediction of the
amount of memory that will be allocated in the next GC cycle.
SlidingValue
SlideFactor is an integer value converted to a
percentage by which we depreciate
the previous sliding value, that is, the sliding value calculated
during the previous GC cycle.
SlideFactor
SafetyMargin is an integer value converted to a
percentage which we add to
the current sliding value, in order to reduce unnecessary
boosts for small variations in the memory level. For example,
in the calculation for the startup memory threshold, if the value
of NormalSafetyMargin is not big enough, the RTGC can
start too late and have to
be boosted later, causing jitter for the soft real-time threads.
SafetyMargin
NormalSafetyMargin
MinFreeBytes is the user-estimated amount of free memory
below which the RTGC should always start its next run (or be boosted).
(This is the parameter NormalMinFreeBytes or
BoostedMinFreeBytes.) This user-estimated value
is used in calculating the threshold.
MinFreeBytes
Threshold is the final value for the startup
memory threshold or the boosted memory threshold, that is, the
free memory threshold below which the
RTGC will start its next run or be boosted to its higher priority.
Threshold
The expert user can
configure SlideFactor, SafetyMargin, and MinFreeBytes.
The first formula above uses the sliding value calculated
during the previous GC cycle.
For the initial cycle of the RTGC, the sliding value is zero,
as there was no previous GC cycle.
With the slide factor applied, the sliding value is still zero.
Therefore, the amount of memory allocated during this first
cycle becomes the new sliding value.
For subsequent GC cycles,
the sliding value calculated during the previous GC cycle is
depreciated by the slide factor percentage.
This result is compared with the
amount of memory allocated during the current GC cycle, and the
higher of the two becomes the new sliding value, which
represents the amount of memory we predict will be
allocated during the next cycle.
Then, in the second formula above, this new sliding value,
with the safety margin percentage applied, is compared with the
user-estimated free memory threshold (MinFreeBytes),
and the higher of the two becomes the
memory threshold that will actually be used to determine startup or
priority boosting of the RTGC.
The slide factor represents the percentage of memory allocated
during the previous cycles that will not be considered in the calculation
of the prediction of the allocation needs during the next garbage-collection cycle.
In other words,
this can be considered as the speed with which allocation bursts
are forgotten.
As an example, let's say that you have set the SlideFactor parameter
to 40. In this case, at the end of each GC cycle, the sliding value is reduced by 40%
before being compared to the memory allocated during the cycle.
If a large amount of memory is allocated during the cycle, only 60%
of that amount will be used to determine when to start the
next cycle or boost the priority.
If the slide factor is increased to 80%, then only 20% of the allocation
is used in the calculation, and the effect of the allocation burst is more quickly
"forgotten." Therefore, by increasing the SlideFactor parameter, you ensure
that the RTGC will not continue consuming a lot of CPU cycles after a
period of allocation bursts.
As mentioned above, the sliding value is initially zero. It also goes
back to zero if no allocations are performed during a few GC cycles.
Note that, if the sliding value is zero for the calculation of the
memory thresholds (normal or boosted), the RTGC
could start or be boosted too late and cause jitter for the first few cycles (the
time needed for the application to reach a steady state). If you are
concerned with this "initial learning phase," you can specify a
MinFreeBytes threshold, which should roughly
correspond to the average behavior of the application.
The SafetyMargin parameter is a percentage of
the calculated sliding value that is added to the value before we
compare it to the free memory threshold. In this way, the RTGC
ignores small variations in the level of memory allocations.
This represents an estimation of the variation of allocation rate
that can be supported without creating unacceptable jitter.
Decreasing NormalSafetyMargin or NormalMinFreeBytes
improves throughput by starting the
RTGC later. Unfortunately, the RTGC might start too late, particularly
at application startup or when the allocation rate increases.
In this case the RTGC priority would be boosted,
allowing the RTGC to preempt the non-critical real-time threads.
Increasing NormalSafetyMargin or NormalMinFreeBytes
avoids this situation.
Note that if NormalSafetyMargin is too big, the
RTGC might run continuously, and threads at
a priority lower than RTGCNormalPriority might be prevented
from running. This might also happen if the real-time threads often
have allocation bursts and the NormalSlideFactor is too low.
Increasing NormalSlideFactor allows the RTGC to forget more quickly
the allocation bursts, improving the throughput but increasing the
likelihood of jitter during the next allocation burst.
For boosted mode, these tuning options are mainly useful in a multiprocessing enviroment.
See the section Improving the Determinism by Using Multiprocessors.
The VM auto-tunes the memory
thresholds to try to start or boost the RTGC as late as possible (to
maximize throughput) while trying not to have to increase its priority or block
the non-critical threads (which impacts their determinism).
The RTGC can go through three different phases.
By monitoring how much memory is allocated when the RTGC is
running at RTGCNormalPriority, the VM tries to start it on
time to complete at that priority using the number of processors
specified by RTGCNormalWorkers. By default, the RTGC uses all these processors at
priority 11, blocking the JLTs.
RTGCNormalWorkers
If it fails to complete on time, the RTGC is boosted to RTGCBoostedPriority,
and the non-critical threads suffer
from jitter. If RTGCBoostedWorkers is set to a value lower than
the total number of processors, then the highest priority
non-critical threads might be able to use the remaining processors to make
some progress. Otherwise, the RTGC tries to use all the processors.
The VM tries to boost the RTGC so that it should
complete before reaching RTGCCriticalReservedBytes. This
is also done by monitoring how many bytes are allocated while running.
If the RTGC does not complete fast enough, the RTGCCriticalReservedBytes
memory threshold is hit. In
that case, the priority of the RTGC is not modified but the RTGC prevents
the non-critical threads from allocating additional bytes. If memory
is completely exhausted, a critical thread will
block when it tries to allocate memory.
You must ensure that this amount of memory will be sufficient for the
critical threads.
As mentioned above, in order to guarantee determinism,
you must configure the RTGCCriticalReservedBytes parameter.
This is the only parameter that you are required to configure.
You might also want to configure the RTGCCriticalBoundary parameter.
For the rest, Java RTS includes several auto-tuning mechanisms to try to
lighten the configuration burden, as detailed in Description of the Auto-Tuning Mechanism.
Java RTS provides a garbage collector API and a few
MBeans to gather additional information. See Using
MBeans to Monitor and Configure the RTGC.
However, the simplest way to
figure what is happening is to use the -XX:+PrintGC command-line option
(and optionally -Xloggc:<filename> to redirect the output
to a file). After each GC cycle, the RTGC prints information that was gathered during the cycle.
For more advanced tuning or debugging, the -XX:+RTGCPrintStatistics
command-line option provides additional detailed information.
See Examples of Printing RTGC Information
for sample output from these options.
With respect to critical threads, the most important number to look at
in the PrintGC output is the
"worst" non-fragmented (that is, allocatable) memory, which represents the lowest level
of allocatable memory since the VM startup.
If this value falls to 0, this means that a critical thread was blocked during an
allocation. This is a clear indication that you must increase RTGCCriticalReservedBytes.
On the other hand, if RTGCCriticalReservedBytes is too high, the RTGC
will run too often at RTGCBoostedPriority
and block non-critical threads. This decreases
the throughput. At worst, it will run continuously and prevent the
non-critical threads from making any progress. This might even block your
system. However, you will still be able to see the RTGC messages.
PrintGC
The PrintGC output also provides the "minimum" level of non-fragmented memory
during this last cycle. You can compare this minimum value with the overall (worst) minimum.
With respect to non-critical real-time threads, the most
important number is the priority at which the RTGC ends. All the threads below
that priority have suffered from jitter during that GC cycle. This is
not an error. It comes from the balance between throughput and
determinism. Expert users can try to improve the determinism by
ensuring that the worst case is taken into account for a longer time
(by reducing the slide factors in the auto-tuning mechanism)
and/or that the RTGC can cope with
quicker changes in the allocation rates (by increasing the safety
margins or the minimum threshold). To optimize the throughput, you
can try to tune the RTGC so that the minimum free memory is nearly
equal to the memory threshold that would have caused the RTGC to enter the
next mode. If RTGCBoostedWorkers is set, you should also
determine whether the RTGCCriticalReservedBytes boundary was
reached by checking the RTGC output; if the boundary was reached,
the RTGC prints the number of blocked threads.
In addition, if a non-critical thread blocked on allocation, the "worst" non-fragmented
memory will be no more than the value of RTGCCriticalReservedBytes.
The RTGC is truly concurrent. The only time it prevents a
thread from running is when it has to look at this particular thread's Java
stack, where the local variables and operands are stored. Hence, the
main potential source of pause time for a given thread is caused by
the scanning of its stack. Since a thread is not impacted by the scanning of
the other thread stacks, the pause time for a thread is smaller than
with non-concurrent GCs.
To reduce this pause time even further, threads should execute in compiled
mode. See the Java RTS Compilation Guide for
further details on how to compile the methods executed by
time-critical code. In general, the pause time primarily depends on
the stack's depth. Pause times are very small for threads that do not
extensively perform recursive method calls.
Do not forget that there are a lot of other jitter sources
that might look like RTGC pause times. Java RTS provides solutions to avoid them.
For instance, the
Java RTS Compilation Guide covers the issues
related to late class loading and initialization. Remember also
that only real-time priorities provide real-time
guarantees. For priorities lower than real-time,
the threads are scheduled with a time-sharing
policy to reduce the risks of starvation in the non real-time part
of the application. Hence, a thread at a priority lower than real-time can be
preempted at any time to let lower priority threads make some progress.
Java RTS provides a garbage collector API,
useable from the application code, which allows you to
dynamically change the RTGC parameters either locally from your Java
application or remotely through management beans.
For additional information, look in the Javadoc for the
FullyConcurrentGarbageCollector class. The
MBeans are registered in the realtime.rtgc namespace and
can be viewed with, for example, the JConsole tool. We have also
extended the LastGCInfo attribute of the garbage collector MBean.
To see a description of the attributes
and operations of these MBeans, refer to the corresponding Javadocs:
All these parameters are intended for a production environment.
These parameters are also listed in the Java RTS Options document.
Note that the RTGC checks the logical consistency of the parameter values
and adjusts them accordingly. For example, if you specify a value for RTGCBoostedPriority
that is higher than the value for RTGCCriticalBoundary, then the RTGC
will set the value of RTGCBoostedPriority equal to that of RTGCCriticalBoundary,
with an accompanying message.
Caution: The "-XX" options are not
part of
the official Java API and can vary from one release to the other.
Parameter
Default
Env
Description
0
prod
Free memory threshold, in bytes, under which only
critical threads can allocate memory.
To guarantee determinism, configure this parameter.
See Tuning the RTGC..
false
Print information gathered whenever the GC runs.
See Examples of Printing RTGC Information
for sample output.
UseRTGC
true
Use the new real-time garbage collector instead of the
non-real-time, serial collector.
RTGCNormalPriority
11
Normal priority for the RTGC. It defaults to 11 to try
to offer soft real-time behavior for all the real-time threads.
Initial number of parallel RTGC worker threads. 0 means no
limit. 1 is for non-parallel RTGC.
Boosted number of parallel RTGC worker threads. 0 means no
limit. 1 is for non-parallel RTGC.
RTGCPrintStatistics
Print additional information gathered whenever the RTGC
runs, for advanced tuning or debugging..
The following options are obsolete and should not be used.
They will be completely removed in a future version of Java RTS.
Explanation
RTGCCriticalPriority
This parameter was previously used to indicate the priority at which the RTGC
should execute in order to guarantee determinism, that is, it represented
the division in priority levels between critical threads and non-critical threads.
This parameter has been replaced by two new parameters: RTGCCriticalBoundary
and RTGCBoostedPriority. These new parameters allow finer RTGC tuning.
For this release, if this obsolete parameter is set, and the two new
parameters are not set, then the two new parameters will be set to the value of
this obsolete parameter.
RTGCMaxCPUs
This parameter was previously used to indicate the maximum number of
parallel RTGC threads that could be used.
This parameter has been replaced by two new parameters: RTGCNormalWorkers
and RTGCBoostedWorkers. These new parameters allow finer RTGC tuning.
This section first provides examples of the output from the XX:+PrintGC
and the XX:+RTGCPrintStatistics command-line options. Then it presents
the output from a bad (non-determinstic) configuration, with an analysis.
Finally, it shows the output that results from correcting the configuration.
XX:+PrintGC
XX:+RTGCPrintStatistics
When the XX:+PrintGC command-line option is specified, Java RTS
prints information gathered during the last GC cycle, as shown below:
}
, 0.0635765 secs]
With the XX:+RTGCPrintStatistics command-line option,
Java RTS prints additional information about the GC cycle, as shown in the following example:
}
GC cycle stats:
RTGC completed in deterministic mode (3 workers at priority 40)
End free bytes: 105152 bytes
Min free bytes: 1024 bytes
Next boosted threshold: 72960 bytes
Next normal threshold: 139840 bytes
Allocations during the GC:
in deterministic mode: 60896 bytes
(worst 60896 bytes, average 60896 bytes)
in boosted+deterministic modes: 60896 bytes
(worst 60896 bytes, average 60896 bytes)
in normal+boosted+deterministic modes: 60896 bytes
(worst 60896 bytes, average 60896 bytes)
Total CPU cost in nanoseconds: 66093516
Pacing CPU cost: 100667 (0 %)
Serial CPU cost: 5858823 (8 %)
Parallel worker_0 CPU cost: 10220120 (15 %)
Parallel worker_1 CPU cost: 46879782 (70 %)
Parallel worker_2 CPU cost: 2175946 (3 %)
Bytes allocated by critical threads:
in deterministic mode: 0 bytes
total for this cycle: 0 bytes
grand total: 0 bytes
Minimum RTGCCriticalReservedBytes: 0 bytes
, 0.0635765 secs]
The output is largely self-explanatory. However, note the following points:
The following test output from XX:+RTGCPrintStatistics shows that the
Java RTS VM was not configured to be deterministic.
[ 63823K->18950K(65536K, non fragmented: current 46585K / min 1050K / worst 0K,
blocked threads: max 0 / still blocked 0 requesting 0 bytes,
dark matter: 17572K in 16299 blocks smaller than 2048 bytes)
<completed without boosting> {CPU load: 2.22 recycling / 0.06 since last GC}
GC cycle stats:
RTGC completed in normal mode (4 CPUs at priority 15)
End free bytes: 47695680 bytes
Min free bytes: 17111136 bytes
Next boosted threshold: 171960 bytes
Next normal threshold: 1676650 bytes
Allocations during the GC:
in deterministic mode: 0 bytes
(worst 243072 bytes, average 126566 bytes)
in boosted+deterministic modes: 0 bytes
(worst 243072 bytes, average 126566 bytes)
in normal+boosted+deterministic modes: 1367904 bytes
(worst 1461152 bytes, average 841890 bytes)
Total CPU cost: 231007716 nanoseconds
Pacing CPU cost: 285081 (0 %)
Serial CPU cost: 6322420 (2 %)
Parallel worker_0 CPU cost: 89724588 (38 %)
Parallel worker_1 CPU cost: 43886333 (18 %)
Parallel worker_2 CPU cost: 46492001 (20 %)
Parallel worker_3 CPU cost: 44295704 (19 %)
Bytes allocated by critical threads:
in deterministic mode: 0 bytes
total for this cycle: 16128 bytes
grand total: 5322592 bytes
Minimum RTGCCriticalReservedBytes: 9216 bytes
, 0.1054441 secs]
The value of the "worst" non-fragmented memory on the first line of the output
shows that at least one thread blocked during allocation.
Since the "grand total" for "Bytes allocated by critical threads" is not zero,
that means that there were, in fact, some critical threads, and therefore,
RTGCCriticalReservedBytes must be set to a value at least equal to
the amount of memory needed by these critical threads. And to guarantee the determinism
of the critical threads, you should add a very large safety margin to this value.
In this case, since the "worst" value was zero, the RTGCCriticalReservedBytes
value was set too low. The "Minimum RTGCCriticalReservedBytes" figure shows that
at one point the reserved memory for critical threads was 9216, which is very low.
The following output from XX:+RTGCPrintStatistics shows what
happens in the same test when we set RTGCCriticalReservedBytes
to 1 Mb. Determinism has now been ensured for the critical threads.
[ 62798K->16857K(65536K, non fragmented: current 48678K / min 2068K / worst 1016K,
blocked threads: max 0 / still blocked 0 requesting 0 bytes,
dark matter: 15426K in 16298 blocks smaller than 2048 bytes)
<completed without boosting> {CPU load: 2.24 recycling / 0.06 since last GC}
GC cycle stats:
RTGC completed in normal mode (4 CPUs at priority 15)
End free bytes: 49825664 bytes
Min free bytes: 15991968 bytes
Next boosted threshold: 1266976 bytes
Next normal threshold: 2853616 bytes
Allocations during the GC:
in deterministic mode: 0 bytes
(worst 277600 bytes, average 160552 bytes)
in boosted+deterministic modes: 0 bytes
(worst 277600 bytes, average 160552 bytes)
in normal+boosted+deterministic modes: 1442400 bytes
(worst 1450688 bytes, average 889055 bytes)
Total CPU cost: 244487405 nanoseconds
Pacing CPU cost: 290305 (0 %)
Serial CPU cost: 6557720 (2 %)
Parallel worker_0 CPU cost: 46866569 (19 %)
Parallel worker_1 CPU cost: 93580338 (38 %)
Parallel worker_2 CPU cost: 50525021 (20 %)
Parallel worker_3 CPU cost: 46592857 (19 %)
Bytes allocated by critical threads:
in deterministic mode: 0 bytes
total for this cycle: 15616 bytes
grand total: 4763680 bytes
Minimum RTGCCriticalReservedBytes: 17440 bytes
, 0.1102827 secs]
With RTGCCriticalReservedBytes set to 1 Mb, the "worst"
non-fragmented memory level is 1016 Kb. Therefore, memory was never exhausted,
and determinism has been assured for the critical threads. However, since
"worst" is lower than the value of RTGCCriticalReservedBytes,
some non-critical threads might have blocked on allocation.
|
http://docs.oracle.com/javase/realtime/doc_2.1/release/JavaRTSGarbageCollection.html
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
I am a college student just starting out with java and I am working with pretty easy stuff here (I guess, lol), but im just trying to learn one bit at a time. I am just displaying my values in different ways and I can't seem to get my values to display one at time on new lines. I dont know why the \n isnt compiling. The part i am having trouble with has the "Not Working" tag in the code.
[highlight=Java]
//Trying to figure out how to use /n
public class Test
{
public static void main(String[] args)
{
int gold= 5;
int silver= 3;
int bronze= 1;
//Stacked titles
System.out.println(" gold\n silver\n bronze");
//Values stacked (not working)
System.out.print (gold\n silver\n bronze);
//Values side by side
System.out.println( gold + " " + silver + " " + bronze);
//Values added together
System.out.println(gold + silver + bronze);
}
}
|
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/35830-easy-question-pro-printingthethread.html
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
A.
Click the Source tab at the bottom of the editor to view the XML source code in the WSDL document.
The root element of a WSDL document is the
definition element, which declares the namespaces used in the document:
xmlnsattribute specifies that attribute names without a namespace qualification are in the default namespace,.
targetNamespaceattribute specifies that elements in the WSDL declare names in the namespace,
urn:CreditRating.
xmlns:tnsattribute specifies that names beginning with the prefix
tnsare in the namespace,
urn:CreditRating.
The entire web service description is defined within the
<definitions></definitions> tags, using the following elements:
typeselement: Imports or contains the schema definitions of the data types to be used in the messages that will compose the service.
messageelement: Defines the content of a message that is to be supported by the service. A
messageelement can contain one or more
partelements. Each
partelement is associated with a data type. In the example, when you imported the XML schema into the WSDL, JDeveloper added a
messageelement for you.
portTypeelement: Describes the operations of the service from messages. A
portTypeelement contains one or more
operationelements. Each
operationelement, which describes an interaction pattern between a client and a server, may contain an input message, an output message, and a fault message. The order of the
inputand
outputelements within the
operationelement determines the order in which the messages occur within the operation.
bindingelement: Describes a specific communication protocol for each
messagein each
operationof a
portTypeelement.
serviceelement: Describes a web service as a collection of
portelements. A
portelement defines a specific network address for a
binding. A
soap:addresselement within a
portelement specifies that the
portreceives SOAP messages.
For complete information on WSDL and the core elements, see the W3C Web Services Description Language page at
|
http://www.oracle.com/technetwork/developer-tools/jdev/ccset17-tellme2-4-095501.html
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
Details
- Type:
Bug
- Status: Closed
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: None
- Labels:None
- Environment:
Operating System: Windows XP
Platform: PC
Description
Hi, for all that might use this class:
several things I found when using this class to calculate the
cumulative probability. I attached my code FYI. three things:
1. when I used my code to calculate the cumulativeProbability(50) of
5000 200 100 (Population size, number of successes, sample size),
result was greater than 1 (1.0000000000134985);
2. when I calculated cumulativeProbability(50) and
cumulativeProbability(51) for the distribution 5000 200 100, I got the
same results, but it should have been different;
2. the cumulativeProbability
returns "for this distribution, X,
P(X<=x)", but most of the time (at least in my case) what I care about
is the upper tail (X>=x). based on the above findings, I can't simply
use 1-cumulativeProbability(x-1) to get what I want.
here's what I think might be related to the problem: since the
cumulativeProbability is calculating the lower tail (X<=x), a
distribution like above often has this probability very close to 1;
thus it's difficult to record a number that = 1-1E-50 'cause all you
can do is record sth like 0.9999..... and further digits will be
rounded. to avoid this, I suggest adding a new function to calculate
upper tail or change this to calculate x in a range like (n<=x<=m), in
addition to fix the overflow of the current function.
thank you for your patience to get here. I'm a newbie but I've asked
Java experts in our lab about this. looking into the source code really
isn't up for me......hope someone can fix it,
BTW I'm using cygwin under
WinXP pro SP2, with Java SDK 1.4.2_09 build b05, and the commons-math I used is
both the 1.0 and the nightly build of 8-15-05.
the code:
-------------------
import org.apache.commons.math.distribution.HypergeometricDistributionImpl;
class HyperGeometricProbability {
public static void main(String args[]) {
if(args.length != 4){ System.out.println("USAGE: java HyperGeometricProbabilityCalc [population] [numsuccess] [sample] [overlap]"); }
else {
String population = args[0];
String numsuccess = args[1];
String sample = args[2];
String overlap = args[3];
int populationI = Integer.parseInt(population);
int numsuccessI = Integer.parseInt(numsuccess);
int sampleI = Integer.parseInt(sample);
int overlapI = Integer.parseInt(overlap);
HypergeometricDistributionImpl hDist = new
HypergeometricDistributionImpl(populationI, numsuccessI, sampleI);
double raw_probability = 1.0;
double cumPro = 1.0;
double real_cumPro = 1.0;
try {
if (0 < overlapI && 0 < numsuccessI && 0 < sampleI)
}
catch (Exception e)
}
}
}
----------------------------------
Activity
- All
- Work Log
- History
- Activity
- Transitions
|
https://issues.apache.org/jira/browse/MATH-100
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
A friendly place for programming greenhorns!
Big Moose Saloon
Search
|
Java FAQ
|
Recent Topics
|
Flagged Topics
|
Hot Topics
|
Zero Replies
Register / Login
JavaRanch
»
Java Forums
»
Databases
»
Object Relational Mapping
Author
[Solved] [Problem] Referencing two objects with OneToOne relation
Joe Schronstein
Greenhorn
Joined: Apr 18, 2009
Posts: 2
posted
Apr 18, 2009 12:56:59
0
Hi,
I am working on a project with Hibernate and this is my situation. I have a base class A and two classes B and C that extend it. This works fine as far as I know with the following code:
@Entity @Inheritance(strategy=InheritanceType.JOINED) public class A implements Serializable { @Version private long version; @Id @GeneratedValue private long id; // ... } @Entity @PrimaryKeyJoinColumn(name="A_ID") public class B extends A { // additional properties here // ... } @Entity @PrimaryKeyJoinColumn(name="A_ID") public class C extends A { // additional properties here // ... }
So, Hibernate creates a table for every class, that's A, B and C. It is the "Joined subclasses" method of the Hibernate Annotations Reference Guide. This seems to work fine as mentioned above.
Now, I have a class named Wrapper which has a property of class B and C and
only
one, like this:
@Entity public class Wrapper implements Serializable { @Id @GeneratedValue private long id; private B instance_B; private C instance_C; // ... }
I thought about realizing a
OneToOne
relation (with shared primary keys) between class B and Wrapper and C and Wrapper using something like explained
here
but I was not able to achieve it or was not sure if it was correct. Than I thought about using just a foreign key for the classes B and C pointing to the Wrapper with a unique constraint but I do not know how.
Another idea would be to set the id of the instances of the classes B and C to the id of the Wrapper when setting instance_B and instance_C. I do not know if this is the right way and I am very doubtful about that.
I hope that this is comprehensible and that somebody can help me out of this. Perhaps somebody has a better idea than I had.
With best regards
Joe Schronstein
Greenhorn
Joined: Apr 18, 2009
Posts: 2
posted
Apr 19, 2009 04:59:56
0
Ok, I finally got it myself...
The Reference Documentation states that you can also establish a
OneToOne
relation by adding foreign keys. For my case this would be adding foreign keys to the table of the Wrapper for instance_B and instance_C. So, those foreign keys reference to a row in the tables B and C (more exact to the table of A because B and C share the primary key with A because of inheritance).
Doing this like explained in the hibernate documentation for the annotations you have to put some annotations above the Getter of your property (here the getters of instance_B and instance_C). Like this:
@OneToOne(cascade = CascadeType.ALL) @JoinColumn(name="name_of_fk") public B getB() { return instance_B; }
This does not work
(in my case). Hibernate just put the entries as a binary type into a column of the DB.
Now, I wrote the annotations shown in the code directly above the declaration of the two properties:
@OneToOne(cascade = CascadeType.ALL) @JoinColumn(name = "name_of_fk_B") private B instance_B; @OneToOne(cascade = CascadeType.ALL) @JoinColumn(name = "name_of_fk_C") private C instance_C;
This finally works for me.
Hibernate takes the primary key out of the associated rows of the tables of B and C and uses them as foreign keys in the columns name_of_fk_B and name_of_fk_C of the rows of the table Wrapper.
Yeapeeh.
Is there anybody else experiencing those problems?
I agree. Here's the link:
subject: [Solved] [Problem] Referencing two objects with OneToOne relation
Similar Threads
Naming child-parent relation with JPA Annotation
OneToOne shared PK isn't setting key in related table during insertion
EJB3/JPA @OneToOne
one to one unidirectional mapping in jpa
can i mark a Set/Collection porpety as @Id
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton
|
http://www.coderanch.com/t/441700/ORM/databases/Solved-Referencing-objects-OneToOne-relation
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
Factors Affecting The Risk Associated With An Investment Finance Essay
This paper highlights the financial risk management by developing a deeper understanding into the topic. The paper discusses the risks involved in the business. The paper also gives insights about the risk management, what are the factors affecting the risk associated with an investment. Financial risks are discussed to gain knowledge about the risks involved in the financial matters of the organization. The paper discusses about the financial risk management at length by first defining the financial risk management then specifying the steps required to manage the financial risks. The paper then discusses about the tools and policies required to manage the financial risks associated with an investment.
Introduction
The paper is designed to understand the basic concepts involved in the field of finance by taking a look at the basics of risk then getting deeper into the financial risk and thereby analyzing the financial risk management, its process and the tools required for the measurement of the risk associated with the investment.
The purpose of the study is to understand the basics of the risks associated with the investment. It also discusses about the risk management by defining and understanding the process involved in the risk management. Further, the paper analyzes the financial risks by getting deeper understanding of the same by uncovering the hidden aspects of the financial risk. The paper gives deep knowledge about the financial risk management by giving a step to step description of the process and then describing the tools, techniques and strategies for the measurement and management of the financial risk management.
For almost ages, risks have been a part of our life, then whether it be the humans or the business. Risks have become a part and parcel of our everyday life. The intensity of the risk involved varies as per the situation at hand. Risk and Risk management are not occurring at the same time.
In laymen terms, risk is something the output of which is usually different from the expected output. But in business terms, it is the probability that an investment's desired return is different from the original return. It is the probability of losing all or part of the actual investment. There is a trade-off between the risk and the return. If the amount of risk involved in the investment in higher, then greater is the expected potential return of investment. In the recent years, risks associated with the investment in the businesses have increased. But on the other hand, the businesses which have taken risk have significantly improved their profits as against the expectations from the other businesses.
Risk management is a systematic process of identifying and assessing company risks and taking actions to protect a company against them. Broadly speaking, Risk Management is working today with the chance that future events may prove to be harmful for the organization and may cause adverse effects on the profitability of the organization or the investment. It is a two step process - determining the risks likely to occur in the investment and then managing those in the best possible way as per the investment requirements. Good risk management strategies help the organization in growing faster resulting in reduction of costs and improved performance. There are many types of Risk Management, a few of them are Financial risk Management, Operational Risk Management, Market Risk Management etc.
Organizations are getting affected either directly or directly with the continuous and increasing exposure to financial markets. When an organization exposes itself to the financial markets, there are chances of loss as against the opportunities for gain and profit. Exposure in the financial markets provides competitive and strategic benefits.
Financial Risks
Financial risk is an umbrella term for any risk associated with any form of financing. Typically, in finance, risk is synonymous with downside risk and is intimately related to the shortfall, or the difference between the actual return and the expected return (when the actual return is less). Financial risks occur through generally uncounted dealings of financial nature, including investments and loans, sales and purchase and various other activities useful to the business. It arises when the expected return of the investment is different from the original investment. It deals with the downside factors affecting the working of the organization and hence affecting the performance and profitability of the organization.
In today's constantly changing world, the financial prices are changing continuously which lead to increase in costs, reduce revenues and adversely affect the profitability and hence the performance of the organizations. These fluctuations in the financial prices can make it difficult to decide the price of goods and services and allocate the capital for the various process running in the organization.
There are three primary reasons for financial risk:-
Financial risks arising from the exposure of the organization to sudden changes in the market price, such as interest rates, exchange rates
Financial risks arising from the interaction with other organizations such as suppliers, customers in derivative transactions
Financial risks arising from internal failures due to people, processes and systems
The components of the financial risk are profit potential and risk, debt and risk and alpha, beta and risk. If the intensity of the investment is more, then the potential for the profit is also more and no organization would like to take risk until and unless profits exists in the investment. The intensity of financial risk taken by an organization affects the return from the potential investment. The amount of debt an organization realizes also directly affects the intensity and extent of the total financial risk associated with the investment. "Alpha" and "Beta" values of an investment play a major role in the total risk associated with an investment. Alpha risk of an investment is the risk associated with an investment of an organization in comparison to other organizations. Beta value of an investment is the risk associated with an investment's fluctuations in view of the whole market.
Financial Risk Management
Financial Risk Management is defined as the practices and procedures that a company uses to optimize the amount of risk it handles with its financial risk. Financial Risk Management is the method to deal with the irregularities resulting from the sudden changes in the financial markets. It involves assessing the financial risks facing an organization and developing management strategies consistent with the internal priorities and the policies. When the financial risks are taken into consideration, it provides the organization with a competitive advantage over other organizations. It assures that everybody in the organization agrees on key issues of risk. Managing financial risk involves the organizations to make financial decisions about acceptable risks as against the unacceptable risks.
There are different ways to manage financial risks which include a range of strategies and products. However, it is important to understand how these risks are reduced as per the organization's risk acceptance and objectives.
The strategies for risk management frequently include derivatives. Derivatives are frequently traded widely among financial institutions and on organized exchanges. Trading in derivatives spans from commodities, equity and fixed income securities, exchange rates etc. The worth of derivative contracts, like futures, options, and swaps is determined from the price of the existing investment.
There is similarity between the products and strategies used by market participants for managing the financial risk as that used by the speculators for increasing the risk associates with the investment and the leverage. As the use of derivative increases risks associated with the investment, the existence of derivatives allows one to pass the risk to the people who look for risk and its associated opportunities.
The probability of a financial loss is highly desirable. The analysis of financial markets is very crucial as often the standard theories of probability fail in such situations. Risks usually exist in combination with other exposures. The interaction between these types of exposures is important to understand how the financial risk stems up. Sometimes, these type of interactions to exposures is difficult to forecast as they usually depend on human behaviour.
The process of financial risk management is a continuous process and keeps on going continuously in the organization. The strategies used for managing financial risk are implemented and revised as the market situations and the requirements change. Refinements may reflect changing expectations about market rates, changes to the business environment, or changing international political conditions.
Methodology
The risk management process consists of strategies that allow the organization to handle the risks associated with investment in financial markets. The risk management is a dynamic process which evolves inside the boundaries of the organization. It involves and adversely affects many roles in the organization including tax, commodity, treasury etc. The risk management process involves the analysis of both the internal and external risks associated with the investment.
The risk management process is a four step process which includes:
Identification and setting the priorities for the major financial risks associated with the investment
Determining accepted level of tolerance of risk
Applying the risk management strategies in lieu of the policies developed by the organization
Compute, report, examine, and revise the risks associated with the investment if required
The first step in the risk management process is the identification of the financial risks associated with the investment and setting the priorities for the same in the organization. For this purpose, it becomes necessary to examine the entire portfolio of products in the organization ranging from management to competitors to pricing and position in the whole industry. After a clear view on the financial risks is established, the strategies developed by the organization can be implemented in combination with the risk management policies framed by the organization to minimize the potential effects of financial risks. Computation and reporting of risks enables authorities to implement the decisions for the financial risks and to evaluate their outcome, both in the past and in the present after strategies are taken to tone down the risks in the investment.
The risk management is a continuous process, reporting and feedback can revise the whole system by modifying or improving the strategies taken to manage the financial risks involved. A dynamic decision making authority plays a significant role in the risk management process. The decisions regarding the probable loss and various risk minimization techniques provide a stage for the discussion of important at length and the differing opinions of the stakeholders.
Broadly speaking, there are three main ways for managing risk:-
Accepting all the risks as they by sitting dormant and doing nothing
Hedging a part of the risks by identifying risks which can be hedged
Hedging all the risks possible
Factors affecting Financial Rates and Prices
There are a number of factors which affect the financial rates and prices of the investment. These factors include the fluctuations in the interest rates, exchange rates, commodity prices etc. These factors are important to understand as they directly or indirectly affect the risk associated with the investment by an organization.
Interest rates are considered as the economic barometer for analyzing the financial risks associated with an investment. Interest rates are made up of two parts, namely the real rate and expected inflation. If the maturity period for an investment is higher, then the uncertainty associated with the investment if higher. Interest rates form an important part in cost of capital. Companies opt for debt financing for the purpose of extension and projects relating to the capital in the organization. Borrowers are adversely affected when there is sudden change in the interest rates. Level of inflation also affects the interest rates in turn affecting the financial risk associated with an investment. Economic situations prevailing in the world and market actions of the foreign exchanges also affect the financial risk. Some other factors affecting financial risk include monetary policy and the stand of apex bank of the country, financial and political steadiness in the country. Yield curves serve as the basis for providing useful information about the future level of interest rates. The form of the yield curve is generally analyzed and evaluated by market forces prevailing in the market. It provides insights into the changing economic situations in the market as a result of sudden changes in the economy of the country. Several theories have suggested for the determination of interest rates and as a result the yield curve. Some of them are Expectations theory, Liquidity theory, Preferred Habitat Hypothesis theory and Market Segmentation theory.
The demand and supply of the currencies in the market predict the foreign exchange rates. The factors affecting the supply and demand are the sudden changes in the economy of the country, the activity of foreign trade and international investors. Capital flows play an important role in the determination of exchanges rates prevailing in the market. Some factors affecting the exchange rates and the interest rate level are common. Some of these factors are floating or market-determined currencies. Currencies are very critical as even a slight change in the market situations affect the interest rates and the risk factors associated with the investment. Some of the major factors affecting the exchange rates are buying/ selling activity in other currencies, international capital and flow of trade in the global market, views of the foreign institutional investor (FII), financial and political steadiness in the country, monetary policy and the stand of the apex bank of the country, in-country debt levels, economic situations prevailing in the country. Earlier, trading of goods and services with other nations was considered as the key for determining the exchange-rate changes. Nowadays flow of capital in the market is considered very crucial and evaluated very closely. Some theories for the determination of the exchange rates are Purchasing power parity based on "the law of one price", Balance of payments, Monetary approach and Asset approach.
The demand and supply also affect the price of the physical commodity. The value of commodities also gets affected by location and physical quality of the commodity. Supply of commodities to the end customer is a part of the production function. Supply of the commodity gets significantly affected if the production schedule of the organization is disturbed. Demand of commodities gets affected if the end user is getting the same or similar commodity at lower rates than that offered by the organization. Some of the major factors affecting the commodity prices are prevailing interest rates in the market, level of inflation especially of precious metals, currency exchange rates in the market, economic situations governing the country, production costs and capability to deliver to the end users, political steadiness, and availability of alternatives. There are no defined models for the determination of price of the physical commodity.
Tools for Financial Risk Management
Diversification
It is an important and widely used tool in financial risk management. In the past, the risk in the investment was judged only on the basis of changeability of its returns. In recent times, theories not only consider investment's riskiness but also the overall riskiness of the portfolio. Organizations can reduce the investment's risk through the diversification of the risk.
Portfolio Management provides the opportunities for the diversification of the risk by the combination of individual entities to the portfolio. A diversified portfolio includes investments which are loosely interrelated to each other. Diversification among intermediaries of the organization may reduce the risk caused due to unpredicted events impacting the organization through defaults. It also significantly decreases the impact of loss if one issuing party fails. Diversification among customers, suppliers and financing sources reduces the downside effects that adversely affect the organization's business by the risks which are not under management's control.
Hedging is another method through which the financial risks associated with an investment can be minimized. It is the business of seeking assets that offset, or have weak or negative correlation to, an organization's financial exposures.
Correlation is another method measuring the financial risks associated with the investment. It measures the tendency that two assets can go together or not. The value of the correlation coefficient lies in between +1 and -1. If the value of the correlation coefficient is +1 (positive correlation), then the two assets can go together. If the value of the correlation coefficient is -1 (negative correlation), then the two assets can go together but in opposite directions. Negative correlation is the central element for hedging and risk management.
Some other tools for the financial risk management are as follows:-
Value at Risk (VAR) model - It is a type of quantitative risk management tool and is widely used model for the measurement of risk for the loss of money on the portfolio of investments in the financial assets. It is the process of providing an answer to the investor regarding the investor's potential loss of investment within reasonable bounds. It measures the loss in the value of an investment for a definite period of time. Thus, if the value at risk for an investment is $50 million for a week at 90% confidence interval then probability that the value of the investment falls below $50 million is only 10% for a definite time period of a week. It is generally used by commercial and investment banks to grab the loss in the portfolio.
Stress Testing - It is also a type of quantitative risk management tool. These types of tests are generally used in the banks for the risk management system to evaluate the sudden changes in the financial components resulting in the risks for the whole system. Stress tests are usually of two types, sensitivity tests and scenario tests. Sensitivity tests analyze the impact of one component on the position of the bank in the financial market. Scenario tests involve simultaneous analysis of an event which occurred in the past or is yet to happen.
Risk and Control Assessment - The heads of the various business units continuously review the organization practices and guidelines set by the organization to prepare the "risk and control self-assessment" or RCSA reports. A "risk and control self-assessment" report is a document containing the internal risks and controls. It rates the risks as "low", "medium", "high" depending on the expected losses from the investment. Higher management usually focus on risks designated as "high" and "medium". Government authorities like Securities and Exchange Board of India often provide instructions o the company to include the RCSA reports.
Insurance Coverage - Insurance coverage of the financial risks allows the firm to escape from potential losses of credit transactions in the market. This type of protection is very useful in exchange between international partners because fluctuations in the currency and risks arising from political instability which usually increase the financial risk associated with the investment.
Financial Risk Audit - The Company's audit department performs timely review of controls and practices ensuring that such controls are sufficient and working. The company assigns a risk auditor to analyze the procedures for ensuring accuracy and completeness in the financial statements.
Credit risk management - It is the loss generating from the borrower's inefficiency to pay back the loan on the maturity or adhering to the financial promises. Defaulters arise as a result of the invisibility of money from the organization (bankruptcy) or unexpected cash problems.
Conclusion
Risk is something the output of which is usually different from the expected output. Risk Management is working today with the chance that future events may prove to be harmful for the organization and may cause adverse effects on the profitability of the organization or the investment. If the risks associated with the investment are higher, it leads to higher returns from the investment. Financial risks occur through generally uncounted dealings of financial nature, including investments and loans, sales and purchase and various other activities useful to the business. The components of the financial risk are profit potential and risk, debt and risk and alpha, beta and risk. Financial Risk Management is the method to deal with the irregularities resulting from the sudden changes in the financial markets. The strategies for risk management frequently include derivatives. There are a number of factors which affect the financial rates and prices of the investment. These factors include the fluctuations in the interest rates, exchange rates, commodity prices etc. Some of the tools for the financial risk management are diversification of risks, value at risk model, stress testing, risk and control assessment etc. Financial Risk Management reduces the risk associated with an investment by properly defining the process to be followed for the risk management. It increases the efficiency of the organization and also increasing the profitability of the investment.
Request Removal
If you are the original writer of this essay and no longer wish to have the essay published on the UK Essays website then please click on the link below to request removal:
Request the removal of this essay
|
http://www.ukessays.com/essays/finance/factors-affecting-the-risk-associated-with-an-investment-finance-essay.php
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
Are you M Brochier?Claim your profile
Publications (183)103.45 Total impact
Article: [Angioplasty after failure of thrombolysis in myocardial infarction. Hospital results apropos of 40 consecutive patients].[Show abstract] [Hide abstract]
ABSTRACT: Intravenous thrombolysis during the acute phase of myocardial infarction is successful in restoring perfusion in 60 to 80% of cases. When it is unsuccessful, there is disagreement about the best approach to adopt. The article reports the results obtained in 40 consecutive patients treated by angioplasty after thrombolysis had been unsuccessful. Reperfusion was achieved in 92.5% of cases, with a hospital mortality rate of 7.5% (2.5% if patients admitted in a stage of cardiogenic shock are excluded). There was no mortality related to the procedure itself and an emergency aorto-coronary by-pass was not required in any case. Since it is accepted that the subsequent prognosis depends on coronary patency, coronary artery assessment after thrombolysis, followed by angioplasty if the occlusion persists seems to be a logical strategy if the myocardial territory is compromised.Annales de Cardiologie et d Angéiologie 02/1992; 41(1):1-6. · 0.30 Impact Factor
- [Show abstract] [Hide abstract]
ABSTRACT: In order to develop a technique which allows the detection of Pattern A (PA) we present in this paper a series of steps for constructing an observation gril PA. These observations also confirm that the PA present a risk factor which is independent of classical risk factors. A significative positive correlation with work stress has been found showing, in accordance with the view of Friedman and Rosenman that the PA corresponds to a particular behavioral pattern which is dependent on the work environment.Annales de Cardiologie et d Angéiologie 10/1990; 39(7):397-402. · 0.30 Impact Factor
Article: Comparison of circadian blood pressure variations in hypertensive patients with renal artery stenosis and essential hypertension.[Show abstract] [Hide abstract]
ABSTRACT: Ambulatory blood pressure measurements in 20 hypertensive patients with uni- or bilateral renal artery stenosis were compared with those in 20 essential hypertensive patients. Analysis of the 24 hour blood pressure curve of the renal artery stenosis group shows a tendency to equalization of blood pressure throughout the day. The nocturnal decrease of systolic or diastolic blood pressure was not significantly different between the two groups (9.2 vs. 15.3 mmHg). The blunted curve seems to be related more to the severity of hypertension than to its aetiology, but further studies are required to elucidate this point.Journal of Human Hypertension 09/1990; 4(4):390-2. · 2.70 Impact Factor
- Cardiovascular Drugs and Therapy 09/1990; 4 Suppl 4(S4):824-5. DOI:10.1007/BF00051287 · 3.19 Impact Factor
- [Show abstract] [Hide abstract]
ABSTRACT: In order to develop a technique which allows the detection of Pattern A (P.A.) we present in this paper a series of steps for constructing an observation grill P.A. These observations also confirm that the P.A. present a risk factor which is independent of classical risk factors. A significative positive correlation with work stress has been found showing, in accordance with the view of Friedman and Rosenman that the P.A. corresponds to a particular behavioral pattern which is dependent on the work environment.Annales Médico-psychologiques revue psychiatrique 06/1990; 148(5):471-82. · 0.22 Impact Factor
Article: [Arterial and venous thromboembolic complications in patients with renal transplants. Apropos of 2 cases].[Show abstract] [Hide abstract]
ABSTRACT: The study of two cases of young patients with renal transplants who, successively and a few months after the procedure, presented a thrombophlebitis of the lower extremities (with or without pulmonary embolism), then an acute coronary insufficiency, without any encouraging or triggering factor, raises the hypothesis that this is not a mere coincidence. In fact, in the literature, numerous cardiovascular risk factors) inherent in complicated chronic renal failure, dialysis, steroid therapy and immuno-suppressive treatment (Azathioprime, under these circumstances) were demonstrated. In addition, abnormalities of the platelets aggregation, hemostasis and fibrinolysis, were at the origin of thrombo-embolic accidents. Besides any specific cardiovascular risk factor or any obvious biological anomaly, there is still a predisposition of patients with renal transplants, to arterial as well as venous thrombo-embolic accidents.Annales de Cardiologie et d Angéiologie 07/1989; 38(6):309-12. · 0.30 Impact Factor
- [Show abstract] [Hide abstract]
ABSTRACT: The authors report a case of total anomalous pulmonary venous return (TAPVR) draining into the innominate (brachiocephalic) vein, discovered in a 40-year old male patient and successfully treated by surgery. This type of cardiopathy is usually suspected in the neonatal period, being badly tolerated, and such a prolonged survival is quite exceptional. Survival is conditioned by the site of the TAPVR, by the size of the atrial septal defect and also by the presence or absence of an obstacle to the pulmonary venous return and of pulmonary arterial hypertension. In the absence of pulmonary vascular disease, surgical correction is mandatory and results in regression of the symptoms and of the pulmonary arterial hypertension.Archives des maladies du coeur et des vaisseaux 06/1989; 82(5):815-7. · 0.40 Impact Factor
Article: Effects of a comprehensive rehabilitation programme in patients with three-vessel coronary disease.[Show abstract] [Hide abstract]
ABSTRACT: The aim of the study was to assess the effects of rehabilitation in 46 consecutive three-vessel coronary disease patients who were considered to have no possibility of revascularization; there were 45 males and one female (mean age 58) sent in the third week after acute myocardial infarction (N = 31) or after unstable angina (N = 15). Left ventricular ejection fraction (EF) was normal in 50% of the patients, but 15% had an EF less than or equal to 0.30. Three patients could not begin their rehabilitation because of unstable angina (N = 2) or severe pulmonary oedema (N = 1). After a 4-week rehabilitation programme, the comparison of stress tests revealed an increase in functional capacities (maximal work-load = 103.6 +/- 27 W before rehabilitation, 126.4 +/- 31 W after rehabilitation, P less than 0.001), and an improvement of the ischaemic threshold [82 +/- 32 W before rehabilitation, 91 +/- 31 W after rehabilitation, P less than 0.05]. During long-term follow up [20.8 months], four patients died of cardiac events [8.7%]; all of them had an EF less than 0.45. Among the 42 living patients 61.9% were asymptomatic, 28.7% had exertional angina, and 9.4% had cardiac complications, and coronary surgery was performed in two cases with good results. The level of return to work was 85% with the mean delay of 1.7 months after rehabilitation. So, rehabilitation in three-vessel coronary disease patients is safe under medical control; improvements in exertional capacities are obvious and give the patients a better self confidence as assessed by the good score of return to work after rehabilitation.European Heart Journal 12/1988; 9 Suppl M:28-31. DOI:10.1093/eurheartj/9.suppl_M.28 · 15.20 Impact Factor
- [Show abstract] [Hide abstract]
ABSTRACT: BRL 26921 (Eminase registered trade mark in Belgium, Germany and The Netherlands) is the p-anisoyl derivative of the primary (human) lys plasminogen-streptokinase activator complex (APSAC). The acyl-enzyme has the theoretical advantage of causing fibrinolysis in situ in the presence of fibrin clotbound plasminogen. It was administered to 34 patients with severe pulmonary embolism (PE) in an open multicentre study. PE was suspected on clinical, blood gas, ECG, and radiographic data. Pulmonary angiograms performed pre- and post-treatment confirmed the diagnosis and were assessed using the Miller Index (MI). Fibrinogen, plasminogen, alpha-2-antiplasmin, fibrinogen degradation products (FDP), activated partial thromboplastin time (APTT), partial thromboplastin time (PTT) were closely monitored before and after each administration of APSAC. Median angiographic improvement was 50% (range 0-94%). The following adverse events were reported: bleeding at puncture sites (n = 12), haematuria (n = 1), epistaxis (n = 3), fever (n = 2). A blood transfusion was given in one patient with an inguinal haematoma. Systemic fibrinogenolysis occurred in 20/28 patients.European Respiratory Journal 09/1988; 1(8):721-5. · 7.64 Impact Factor
Article: [Limitations of scintigraphy using metaiodobenzylguanidine for locating pheochromocytomas. Apropos of 2 cases].[Show abstract] [Hide abstract]
ABSTRACT: Two cases of MIBG (metaiodobenzylguanidine) scintigraphy are reported: the first case concerns a female patient hospitalized for high blood pressure (HBP) with symptoms evocative of pheochromocytoma. Urinary titration of catecholamines metabolites, which are usually abnormally high, and tomodensitometry permit the visualization of a left adrenal tumor. On the contrary, the MIBG scintigraphy does not show any abnormal fixation. After resection, the pathological examination confirms the diagnosis of pheochromocytoma. The second case concerns a female patient hospitalized for HBP with, on the chest X-Ray, a left postero-inferior density. Serum and urinary catecholamine levels are normal. Tomodensitometry confirms the tumor of the posterior mediastinum and the MIBG scintigraphy demonstrates a focus of thoracic opposite the tumor. After resection, the pathological examination shows an ectopic supernumerary bronchial bud. These two cases illustrate the limitations of MIBG scintigraphy to locate pheochromocytomas. There are false negative (10%) which may be explained by an insufficient uptake of the tracer by the tumor, by an insufficient image formation or by medication interferences. On the contrary, there may be false positives because of histochemical similarities between the chromaffin tissues and certain glandular or neural tumors. Nevertheless, in spite of serious limitations, which we must be aware of, MIBG scintigraphy remains the best primary examination for the location of pheochromocytomas.Annales de Cardiologie et d Angéiologie 04/1988; 37(3):147-51. · 0.30 Impact Factor
- [Show abstract] [Hide abstract]
ABSTRACT: Arterial hypertension is frequently and at an early stage complicated by left ventricular hypertrophy, i.e. an increase in muscular mass due to the proliferation of myofibrillae. This in fact is a physiological mechanism aimed at maintaining systolic function and systemic blood flow rate. Left ventricular hypertrophy may be associated with myocardial alterations, such as increase of collagen, abnormalities of diastolic function, reduced contractility, increased cell excitability and disorders of coronary perfusion. It is responsible for a higher risk of cardiovascular mortality. Antihypertensive treatments, therefore, must not only bring blood pressure down to normal values, but also reduce the myocardial mass. In order to avoid a detrimental effect on coronary reserve, it is highly desirable that arterial hypertension and left ventricular hypertrophy regress simultaneously. Regression of the myocardial hypertrophy associated with arterial hypertension is observed with most antihypertensive drugs, except vasodilators that act directly on the vascular smooth muscle, probably due to stimulation of the sympathetic system. Diuretics also have an inconstant beneficial effect on left ventricular hypertrophy. When a choice has to be made between two drugs that have the same antihypertensive activity, it is the one that also brings about an early and lasting regression of myocardial hypertrophy which must be prescribed.La Presse Médicale 03/1988; 17(7):333-8. · 1.08 Impact Factor
- [Show abstract] [Hide abstract]
ABSTRACT: Primary hyperaldosteronism (HA1) represent a rare etiology of arterial hypertension (less than 1%). It concerns, most of the time, aldosterone-producing adenomas or bilateral adrenal hyperplasias although intermediate forms have been reported. The diagnosis of HA1 is based on simple examinations, especially systematic measurement of kaliemia in every hypertensive patient with a normal sodium diet before treatment. The elevation of aldosterone blood levels associated with a low plasma renin activity confirms the autonomous nature of the hormonal secretion which is dissociated from the renin-angiotensin system. Study of the ratio aldosterone blood level/ARP and the captopril test are particularly useful in borderline cases. Once the diagnosis of HA1 is made, a topographic analysis may be undertaken; tomodensitometry and adrenal scintigraphy are currently the examinations of choice in the diagnosis of adrenal tumors. Due to biological, morphological and topographic factors, aldosterone-producing adenomas may be identified with a great deal of certainty: surgical excision ensures a cure in a large majority of cases. The treatment of bilateral hyperplasias remains medical.Annales de Cardiologie et d Angéiologie 12/1987; 36(9):495-501. · 0.30 Impact Factor
- [Show abstract] [Hide abstract]
ABSTRACT: Fibrinolysis is a physiological process which aims at dissolving intravascular thrombi and is mediated by activation of plasminogen to plasmin. Streptokinase (SK) and urokinase (UK) are non-specific plasminogen activators. They have proved effective as thrombolytic agents, but their use is limited by the risk of haemorrhages due to systemic fibrinogenolysis. More fibrin-specific drugs have recently been developed. One is a tissue plasminogen activator (t-PA), the other is a urokinase precursor (pro-UK), also called single chain urokinase plasminogen activator (scu-PA). Genetic engineering techniques have resulted in the large-scale production of a "recombinant t-PA" (rt-PA) and a "recombinant scu-PA" (r scu-PA) for therapeutic use, notably in acute myocardial infarction. In vitro, these two drugs exhibit a thrombolytic activity that is equal to, or greater than that of SK or UK. In vivo, their fibrinogenolytic effect is less pronounced, and their thrombolytic effect greater than those of SK or UK. "Acyl-enzymes" have more recently emerged. These are inactive acylated SK-plasminogen complexes which progressively become effective in plasma after deacylation. So far, the most extensively studied of these complexes is BRL 26921 (anisoylated plasminogen streptokinase activator complex, or APSAC) which is administered by bolus intravenous injection. It is more thrombolytic than SK but produces systemic fibrinogenolysis to an equivalent degree. Injected intravenously (by infusion or bolus) during the first hours of a coronary infarction these three new thrombolytic agents have proved effective in promoting coronary reperfusion, with an early coronary patency rate of 70-75%.(ABSTRACT TRUNCATED AT 250 WORDS)Archives des maladies du coeur et des vaisseaux 12/1987; 80(12):1785-91. · 0.40 Impact Factor
- [Show abstract] [Hide abstract]
ABSTRACT: The authors report 3 cases of acromegaly diagnosed while the patients were in hospital for cardiovascular disease: arterial hypertension in two and hypertrophic myocardiopathy in all three. Coronary arteriography was normal in the 3 patients. The exercise-induced dyspnoea observed in these 3 cases was unexplained by right and left cardiac catheterization results (normal pressures, normal or increased cardiac index). It was most probably related to the myocardial hypertrophy and to abnormalities in diastolic function demonstrated by radioisotopic methods in patients 2 and 3. The degree of myocardial hypertrophy present in these 3 patients seemed to correlate with the size of the pituitary adenoma and the plasma level of growth hormone rather than with the duration or degree of arterial hypertension. After excision of the pituitary adenoma hypertension persisted in 1 case, due to associated adrenal gland hyperplasia, and subsided in the other cases. Abnormalities of diastolic function and dyspnoea are gradually regressing but left ventricular hypertrophy has not significantly decreased after 6 post-operative months.Archives des maladies du coeur et des vaisseaux 11/1987; 80(11):1643-50. · 0.40 Impact Factor
- [Show abstract] [Hide abstract]
ABSTRACT: A case of leiomyosarcoma of the pulmonary artery in a 64-year old man without previous cardiovascular disease is reported. The clinical picture, which comprised episodes of paroxysmal dyspnoea associated with acute cor pulmonale, suggested pulmonary embolism. Radioisotope perfusion study and pulmonary angiography seemed to confirm this diagnosis, but no improvement was obtained with a prolonged thrombolytic treatment. The presence of a median mass at CT led to exploratory thoracotomy and to the finding of a tumour in the pulmonary artery, which turned out to be a leiomyosarcoma. The disease rapidly took an unfavourable course. Comparison of this case with data from the literature showed that primary tumours of the pulmonary artery are extremely rare, that they are diagnosed with difficulty and often at a late stage and that their prognosis is usually very sombre.Archives des maladies du coeur et des vaisseaux 09/1987; 80(9):1417-21. · 0.40 Impact Factor
Article: [Analysis of segmental kinetics of the left ventricle after intravenous administration of urokinase in the acute phase of myocardial infarction].[Show abstract] [Hide abstract]
ABSTRACT: From 1982 to 1984 included, 31 patients under 70 years of age were admitted during the first three hours of a primary myocardial infarction (MI) and are the subject of a randomized prospective study. 16 patients are treated with 5,000 U of heparin given in intravenous bolus, followed with 150,000 IU of urokinase (UK) in intravenous bolus, then 12,000 IU of UK/min for 90 min or a total dose of 1,230,000 IU. 15 patients are treated with heparin alone (intravenous bolus of 5,000 U). Repeated titrations of creatine phosphokinase (CPK) and the coagulation parameters are performed during the first 24 hours. A coronary angiography with ventriculography (RAO 30 degrees) is performed on the 1st day (D1) and the 3rd week (W3). Study of the left ventricular kinetics (LV) is carried out according to the Stanford method. At D1, the rate of coronary patency is 56 p cent (n = 9) in the UK (A) group and 53 p. cent (n = 8) in the control group B (heparin alone). The percentage of late re-thrombosis is 0 p. cent in group A and 12.5 p. cent in group B (heparin alone). 1 patient died in each group. The CPK peak is less high in case of coronary patency in group A than in case of thrombosis (1,444 +/- 413 vs 1,710 +/- 120 U -heparin alone) and occurs earlier (16 +/- 2 h vs 21 +/- 1 h). In group A a significant decrease of fibrinogen (p. 0.01) as well as plasminogen and alpha-2-antiplasmins (p less than 0.001), is noted. No severe haemorrhagic complications nor sustained rhythm disorders are noted.(ABSTRACT TRUNCATED AT 250 WORDS)Annales de Cardiologie et d Angéiologie 07/1987; 36(6):313-7. · 0.30 Impact Factor
- [Show abstract] [Hide abstract]
ABSTRACT: The prourokinase-urokinase system physiologically contributes to fibrinolysis activation. It is therefore rational to envisage the use of urokinase in thrombotic diseases, and notably in the acute phase of myocardial infarction (MI) where coronary thrombosis is virtually constant. The main studies on this subject were published in 1975 and in 1985, thus reflecting the changes in therapeutic concepts that have occurred during these 10 years. The older studies concerned patients who were admitted within the first 12 hours of MI and had no early angiographic examination; the results were evaluated indirectly on clinical and enzymatic criteria and on the regression of electrical signs of myocardial suffering. The more recent studies concern patients who are treated at an early stage, often within the first 3 hours of the accident, on the basis of experimental data which favoured early coronary reperfusion as a means of protecting the myocardium; in these studies coronary arteriography is performed immediately after the thrombolytic treatment; computer-assisted studies of the left ventricular function are also carried out, so that the results of thrombolysis are expressed in terms of coronary patency and improvement in segmental kinetics. The results of these different sets of studies have proved to be similar with time. Urokinase, notably when injected intravenously, has a beneficial effect in the acute phase of MI when compared to the conventional treatment. The coronary reperfusion obtained with urokinase is favourable to the myocardium, and the sooner it occurs the better. This benefit is demonstrated by clinical, electrical, enzymatic and angiographic data. Thus, despite its cost, urokinase remains useful in the treatment of MI, notably because it is well tolerated.(ABSTRACT TRUNCATED AT 250 WORDS)Archives des maladies du coeur et des vaisseaux 06/1987; 80(5):591-7. · 0.40 Impact Factor
- [Show abstract] [Hide abstract]
ABSTRACT: Seldom mentioned as a possible etiology of false positive stress tests (ST), the right atrial hypertrophy (RAH) may cause electrocardiographic aspects evocative of myocardial ischemia in the lower areas. The retrospective study concerns STs in 4 patients, staying in a cardiac rehabilitation center, following (D14) surgery for isolated type II IAC, Fallot's trilogy (n = 1), or dual valvular disease (tricuspid insufficiency and mitral stenosis, n = 1), and presenting all on basal ECG a RAH (group I). These STs are compared to that of a reference group operated (D14) from a IAC type II (isolated n = 3, associated to RVPA n = 1), but without signs of RAH on the rest ECG (group II). All other possible causes of electrically positive STs were eliminated from this study. In patients with RAH, the stress test is positive in the lower area (ST = -1.27 +/- 0.25 mm). In four other patients without HAD, the STest is negative in 3 cases and uncertain in one. These results do not seem to be linked to pre-operative hemodynamic data, nor to sonocardiographic data. Atrial repolarization alone seems to be the cause of these ECG alterations during stress, as demonstrated, in one patient, by sudden variations of the ST segment during a change from an atrial rhythm (AR) (with retrograde atrial depolarization) to a sinus rhythm (SR). These observations suggest the role of atrial repolarization in the origin of false positive stress tests in patients with RAH.Annales de Cardiologie et d Angéiologie 06/1987; 36(5):249-53. · 0.30 Impact Factor
Article: [Massive hemolysis and acute mitral insufficiency one year following mitral valve repair. Apropos of a case].[Show abstract] [Hide abstract]
ABSTRACT: The authors report a case of massive haemolytic anaemia with acute mitral valve regurgitation and left cardiac failure, which occurred one year after surgical reconstruction of the mitral valve for rupture of smaller leaflet chordae. Anaemia, mitral regurgitation and cardiac failure disappeared after mitral valve replacement, using a Carpentier Edwards No. 29 valve. Haemolytic anaemia following mitral valve reconstruction is exceptional. It seems to be due to the suture material lying in a turbulent regurgitation stream when mitral incompetence develops again.Archives des maladies du coeur et des vaisseaux 04/1987; 80(3):367-70. · 0.40 Impact Factor
Top Journals
Institutions
1983–1986
University of ToursTours, Centre, France
|
http://www.researchgate.net/researcher/38804713_M_Brochier
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
x:Class Directive
Configures XAML markup compilation to join partial classes between markup and code-behind. The code partial class is defined in a separate code file in a Common Language Specification (CLS) language, whereas the markup partial class is typically created by code generation during XAML compilation.
x:Class can only be specified on the root element of a XAML production. x:Class is invalid on any object that has a parent in the XAML production. For more information, see [MS-XAML] Section 4.3.1.6.
The namespace value may contain additional dots to organize related namespaces into name hierarchies, which is a common technique in .NET Framework programming. Only the last dot in a string of x:Class values is interpreted to separate namespace and classname. The class that is used as x:Class cannot be a nested class. Nested classes are not allowed because determining the meaning of dots for x:Class strings is ambiguous if nested classes are permitted.
In existing programming models that use x:Class, x:Class is optional in the sense that it is entirely valid to have a XAML page that has no code-behind. However, that capability interacts with the build actions as implemented by frameworks that use XAML. x:Class capability is also influenced by the roles that various classifications of XAML-specified content have in an application model and in the corresponding build actions. If your XAML declares event-handling attribute values or instantiates custom elements where the defining classes are in the code-behind class, you have to provide the x:Class directive reference (or x:Subclass) to the appropriate class for code-behind.
The value of the x:Class directive must be a string that specifies the fully qualified name of a class but without any assembly information (equivalent to the Type.FullName). For simple applications, you can omit CLR namespace information if the code-behind is also structured in that manner (code definition starts at the class level).
The code-behind file for a page or application definition must be within a code file that is included as part of the project that produces a compiled application and involves markup compilation. You must follow name rules for CLR classes. For more information, see Framework Design Guidelines. By default, the code-behind class must be public; however, you can define it at a different access level by using the x:ClassModifier Directive.
This interpretation of the x:Class attribute applies only to a CLR-based XAML implementation, in particular to .NET Framework XAML Services. Other XAML implementations that are not based on CLR and that do not use .NET Framework XAML Services might use a different resolution formula for connecting XAML markup and backing run-time code. For more information about more general interpretations of x:Class, see [MS-XAML].
At a certain level of architecture, the meaning of x:Class is undefined in .NET Framework XAML Services. This is because .NET Framework XAML Services does not specify the programming model by which XAML markup and backing code are connected. Additional uses of the x:Class directive might be implemented by specific frameworks that use programming models or application models to define how to connect XAML markup and CLR-based code-behind. Each framework can have its own build actions that enable some of the behavior or specific components that must be included in the build environment. Within a framework, build actions can also vary depending on the specific CLR language that is used for the code-behind.
In WPF applications and the WPF application model, x:Class can be declared as an attribute for any element that is the root of a XAML file and is being compiled (where the XAML is included in a WPF application project with Page build action), or for the Application root in the application definition of a compiled WPF application. Declaring x:Class on an element other than a page root or application root, or on a WPF XAML file that is not compiled, causes a compile-time error under the .NET Framework 3.0 and .NET Framework 3.5 WPF XAML compiler. For information about other aspects of x:Class handling in WPF, see Code-Behind and XAML in WPF.
For Windows Workflow Foundation, x:Class names the class of a custom activity composed entirely in XAML, or names the partial class of the XAML page for an activity designer with code-behind.
x:Class for Silverlight is documented separately. For more information, see XAML Namespace (x:) Language Features (Silverlight).
|
https://msdn.microsoft.com/en-us/library/vstudio/ms752309.aspx
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
I have what I suppose is a common problem: some managed bean has an action which adds some messages to the context:
FacesMessage fm = new FacesMessage("didn't work");
fm.setSeverity(FacesMessage.SEVERITY_ERROR);
FacesContext.getCurrentInstance().addMessage(null, fm);
return "some-outcome";
I use the by Ed Burns suggested custom exception handler to display an error page whenever a specific exception occurs. I also try do display an error page when ...
I get an IllegalStateException when redirecting from a preRenderView event. I have worked around it by just ingoring the exception. Is there a cleaner way to achieve the same result?
IllegalStateException
preRenderView
@Named
@RequestScoped
public class ...
I followed the following article to deal with ViewExpiredException -
Dealing Gracefully with ViewExpiredException in JSF2
It does what it is supposed to do. But I wanted to have some FacesMessage ...
I am using a phase listener to manage things like logon and conversation scope. In some situations I want to redirect. The most obvious is when the user is not logged ...
|
http://www.java2s.com/Questions_And_Answers/JSF/Exception/redirect.htm
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
resmgr_msg_again()
Process a message again in a resource manager
Synopsis:
#include <sys/resmgr.h> int resmgr_msg_again( resmgr_context_t *ctp, int rcvid);
Since:
BlackBerry 10.0.0
Arguments:
- ctp
- A pointer to a resmgr_context_t structure that the resource-manager library uses to pass context information between functions.
- rcvid
- The receive ID of the message that you want to process again.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The resmgr_msg_again() function reprocesses the message corresponding to the given rcvid. It does this by:
- calling MsgInfo() and MsgRead() to refresh the resmgr_context_t structure to be as it was as when you originally received the message
- calling MsgCurrent() to adjust your server threads' priority to that of the blocked client
- processing the message as if it had just arrived.
Returns:
- -1
- Failure.
- 0
- Success.
Errors:
- EFAULT
- A fault occurred when the kernel tried to access the buffers provided.
- ESRCH
- The thread indicated by rcvid doesn't exist, has had its connection detached, or isn't in either STATE_REPLY or STATE_NET_REPLY, or it isn't blocked on the connection associated with the rcvid.
- ESRVRFAULT
- The receive side of a message transfer encountered a memory fault accessing the receive/reply buffer.
Classification:
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
|
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/r/resmgr_msg_again.html
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
Hi,
I'm the original author and project manager for Webware. I have read
your comparison to Webware at.
I'm writing to kindly ask you to remove this item from your list:
* Web Components. SkunkWeb encourages the componentization of your
web pages through caching and the like. It can also call components on
other SkunkWeb servers if you set it up to do so.
Webware promotes compententization through several means and was
designed to do so from the very start.
- Webware breaks down into discrete, focused packages.
- The app server is based on servlet factories, of which, you
can install your own.
- Servlets can internally forward messages to each other.
- Servlets can include each other's output.
- The app server supports XML-RPC and Pickle-RPC which facilitate
Pythonic communication between multiple app server instances.
As you can see, there is plenty of focus on components/objects in
Webware.
Regarding caching, the app server caches all the servlets which in turn
decide for themselves what to cache. For example, a method might do
something like:
def foo(self):
if self._foo is None:
self._foo = lotsOfWorkToComputeFoo()
return self._foo
So foo() only works hard the first time and on subsequent calls returns
the cached instance/string/whatever.
The app server doesn't directly get involved in your caching decisions,
as the semantics of your application really determine what can be
cached and for how long.
Also, MiddleKit caches objects extracted from databases and guarantees
their uniqueness (e.g., one distinct record never creates more than one
Python object in a given process).
There are probably several other interesting differences between our
products, but since I'm not familar with SkunkWeb, I can only correct
the misperceptions of Webware.
In any case, the implication that Webware is focused on components is
the item I really wanted to address the most. Probably the "caching"
Cheers,
-Chuck
|
http://sourceforge.net/p/webware/mailman/webware-discuss/thread/20020329055725.BGGZ29627.lakemtao01.cox.net@there/
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
Celinio Fernandes wrote:Pretty interesting.
I already tested these EJB 3 plugins for Struts 2. Thanks for this new one that I will check out soon.
By the way, how come there are some Chinese characters in this example and in the sites you pointed at ?
amine cherif wrote:Hi,
Thank you lee for this article. However, as i am new struts developer, i have some general and specific questions:
- is there another way to use EJB3 in struts 2 (without using plugin) ?
- here is it necessary to define interceptors in order to use EJB3 ?
- @EJB
public void setDemoMethodInjectRemote(
DemoMethodInjectRemote demoMethodInjectRemote) {
this.demoMethodInjectRemote = demoMethodInjectRemote;
}
is it correct to make @EJB annotation before a method ?
- @Interceptors(DemoInterceptor1.class)
public class DemoAction extends ActionSupport {
is this means that all DemoAction methods will be invoked by DemoInterceptor1 interceptor ?
Thank you in advance for the responses
|
http://www.coderanch.com/t/496362/Struts/Intergration-Struts-EJB-struts-ejb
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
As enterprise computing enters the brave new world of service-oriented architecture (SOA), it becomes more and more important to seek new ways to describe, publish, and discover your services. The Web services-based approach does not offer automatic service discovery and often is too complex. New, lightweight development frameworks warrant new, lightweight approaches to service publishing.
Over the past several years, the Spring Framework has emerged as the de facto standard for developing simple, flexible, easy-to-configure J2EE applications. At the heart of Spring lies the Inversion of Control (IoC) principle. According to IoC, an application must be developed as a set of simple JavaBeans (or plain-old Java objects—POJOs), with a a lightweight IoC container wiring them together and setting the dependencies.
In Spring's case, the container is configured via a set of bean definitions, typically captured in XML context files:
<bean id="MyServiceBean" class="mypackage.MyServiceImpl"> <property name="otherService" ref="OtherServiceBean"/> </bean>
Then, when the client code needs to use
MyService, as defined in this Spring context, you do something like:
MyServiceInterface service = (MyServiceInterface)context.getBean("MyServiceBean"); service.doSomething();
In addition to IoC, Spring provides literally hundreds of other services, coding conveniences, and "hooks" into standard APIs that ease the development of a modern Java server-side application. Whether your application uses heavy-lifting J2EE APIs such as Enterprise JavaBeans (EJB), Java Message Service (JMS), or Java Management Extensions (JMX), or utilizes one of the popular Model-View-Controller frameworks for building a Web interface, Spring offers you something to simplify your development efforts.
As the Spring Framework matures, more and more people are using it as a foundation for their large-scale enterprise projects. Spring has passed the test of development scalability and can be used as sort of "component glue" to put together complex distributed systems.
Any nontrivial enterprise application combines many diverse components: gateways to legacy and enterprise resource planning systems, third-party systems, Web/presentation/persistence tiers, etc. It is not unusual for an e-commerce site that began as a simple Web application to eventually grow to contain hundreds of subapplications and subsystems, and face a situation where the complexity starts inhibiting further growth. Often the solution is to break the monolithic application into a few coarsely-grained services and release them on the network.
Whether your application was designed as an integration point for dispersed services or has morphed into one, the task of managing all distributed components and their configuration quickly becomes a time-consuming and expensive one. If your application components are developed using Spring, you can use Spring remoting to expose your Spring-managed beans to remote clients via a multitude of protocols. Using Spring, making your application distributed is as simple as making a few changes in your Spring context files.
The simplest (and most recommended) approach to Java-to-Java remoting in Spring is through HTTP remoting. For example, after registering your Spring dispatcher servlet in
web.xml, the following context piece exposes
MyService for public consumption:
<bean name="/MyRemoteService" class="org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter"> <property name="service" ref="MyServiceBean"/> <property name="serviceInterface" value="mypackage.MyServiceInterface"/> </bean>
As you can see, the actual service is injected into this bean definition and thus made available for the remote calls.
On the client, the context definition reads:
<bean id="MyServiceBean" class="org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean"> <property name="serviceUrl" value="" /> <property name="serviceInterface" value="mypackage.MyServiceInterface" /> </bean>
By the magic of Spring, the client-side code (obtain the service from the context and invoke its methods) doesn't change, and the remote method invocation occurs just as the local one did before.
In addition to HTTP remoting, Spring supports several other remoting protocols out of the box, including other HTTP-based solutions (Web services, Hessian, and Burlap) and heavier ones like remote method invocation (RMI).
Configure and deploy URL-based remoting services
Deploying your services via HTTP-based remoting has several distinct advantages, one of which is that, compared with straight RMI or EJB-based solutions, you have far fewer configuration issues to worry about. Anyone who has tried to work through a nontrivial JNDI (Java Native and Directory Interface) configuration (several load-balanced or clustered J2EE containers from different vendors or even different versions of the same container) can attest to that.
If you base your distributed components on Spring remoting, defining a service on your network is simple. All you need to know is the service URL pointing to the server, port, Web application, context path, and name of the Spring bean implementing this service.
URLs are plain-text strings, and plain text is your friend. At the same time, defining a service via a URL makes the definition somewhat brittle. All the individual portions of the URL listed in the previous paragraph are subject to change, and change they will. Network topography (and network administrators) change, load-balanced server farms replace servers, Web applications deploy onto different containers under different names, holes are punched and closed in inter-network firewalls, and so on.
In addition, those brittle URLs must be stored in Spring context files on every client that could possibly access the service. When they change, all the clients must be updated. And one more thing: As your newly-forged service progresses from development to staging to production, the URL pointing to the service must change to reflect the environment the service is in.
Finally we arrive at the problem definition: Spring's ability to easily expose individual Spring-managed beans as remotely-accessible services is great. It would be even better if all we needed to define (or access) a service was the service name, with all the details about service location hidden from the clients.
Cache service descriptions for auto-discovery and failover
The obvious solution to this problem would employ some kind of naming service to provide dynamic, real-time (or almost real-time) resolution of service name to service location(s). Indeed, I once built such a system using the JmDNS library to register Spring remoting services in the Zeroconf namespace (Zero Configuration Networking, technology also known as Apple Rendezvous).
The problem with the DNS-based approach is that updates to the service definitions are never real-time or transactional. A failed server still appears in the service list until all kinds of timeouts and "keep-alive" games are played. What we need is the ability to quickly publish and alter the lists of URLs implementing our services and make those changes happen simultaneously (read: transactionally) across our entire network.
The systems that satisfy these requirements are available. They are various implementations of a distributed cache. The easiest way to visualize a cache for a Java programmer is to think of it as an implementation of the
java.util.Map interface. You can put something in there using a key, and then you can get something out using the same key later. A distributed cache ensures that the same key-value mapping will exist in all the copies of the same
Map on every server participating in the cache and will update the caches everywhere in a lockstep.
A good implementation of a distributed cache solves our problem. We associate a service name with one or more URLs pointing to the place(s) on the network where this service is implemented. Then, we store the name=(list of URLS) associations in a distributed cache and update them accordingly as the network situation changes (servers come online and are removed, servers crash, etc.). The clients to our services participate in the distributed cache as well and, as such, always have access to the current information about the individual service implementations' locations.
As an added bonus, we can introduce a simple load-balancing/failover solution in this scenario. If a client knows that a certain service is associated with several service URLs, it can pick one of them at random and provide crude but effective load-balancing across the several servers serving those URLs. And, if a remote call fails, a client can simply mark that URL as "bad" and pick the next one, thus providing failover as well. Because the list of service URLs is stored in the distributed cache, the fact that Server A went bad is communicated to the other clients as well.
Distributed caches find use in conventional J2EE applications that provide the backbone for server clustering. For example, if you have a distributed, clustered Web application, a distributed cache will provide session replication among your cluster's members. Though highly reliable, J2EE clustering is a serious bottleneck. Session data change quickly, and the overhead of updating all the cluster members and failing over in case of failure is great. Clustered Web applications with session replication are typically several times less scalable than share-nothing load-balancer-based solutions.
Distributed caching works for our scenario due to the small amount of data being cached. Instead of thousands of session objects typical for distributed session replication, we have only a small list of services and the URLs implementing them. In addition, updates to our list happen infrequently. A distributed cache with such a small list may scale well to numerous member servers and clients.
For the rest of this article, let's look at the real-life implementation of our "service description caching algorithm."
Use Spring and JBoss Cache for service description caching
JBoss Application Server is probably the most successful (and the most controversial) open source J2EE project today. Love it or hate it, JBoss Application Server occupies a well-deserved spot on the list of top deployed servers, and its modular nature makes it very developer-friendly.
The JBoss distribution packs many ready-to-go services. One of interest to us is JBoss Cache. This cache implementation provides high-performance caching of arbitrary Java objects both locally and across the network. JBoss Cache has many configuration options and features, and I encourage you to learn more about it to see how it may fit into your next project.
The features that make JBoss Cache attractive for our project are:
- It provides high-quality, transactional replication of Java objects
- It can run as part of JBoss server or standalone
- It is already available "inside" JBoss as an MBean (managed bean)
- It can use either UDP multicast or "normal" TCP connections
The network foundation for JBoss Cache is JGroups library. JGroups provides network communication between cluster members and can work over either UDP multicast (for dynamic auto-discovery of cache members) or over TCP/IP (for working off a fixed list of server names/addresses).
For this article, I show how to use JBoss Cache to store the definitions of our services and provide dynamic, automatic service discovery.
Note: See Resources to download a zipped file containing an Eclipse project for a Web application that exposes a service via Spring remoting and uses JBoss Cache to share the service descriptions with a client application (set of JUnit tests). All of the code discussed below can be found there.
To begin, we introduce a custom class,
AutoDiscoveredServiceExporter that extends the Spring standard
HttpInvokerServiceExporter to expose our
TestService for remoting:
<bean name="/TestService" class="app.service.AutoDiscoveredServiceExporter"> <property name="service" ref="TestService"/> <property name="serviceInterface" value="app.service.TestServiceInterface"/> </bean>
There is really nothing worth mentioning in this class. We basically use it to mark the Spring remoting services as exposed in our special way.
Next, the server-side cache configuration. JBoss already comes with a cache implementation, and we can use the Spring built-in JMX proxy to bring the cache into the Spring context:
<bean id="CustomTreeCacheMBean" class="org.springframework.jmx.access.MBeanProxyFactoryBean"> <property name="objectName"> <value>jboss.cache:service=CustomTreeCache</value> </property> <property name="proxyInterface"> <value>org.jboss.cache.TreeCacheMBean</value> </property> </bean>
This creates a
CustomTreeCacheMBean in the server-side Spring context. Through the magic of auto-proxying, this bean implements the methods in the
org.jboss.cache.TreeCacheMBean interface. For this to deploy on the JBoss server, just drop the provided custom-cache-service.xml file into your server's deploy directory.
To simplify our code, we introduce a simple
CacheServiceInterface:
|
http://www.javaworld.com/article/2072161/web-app-frameworks/use-a-distributed-cache-to-cluster-your-spring-remoting-services.html
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
Base class for wrapped opaque objects. More...
#include <Teuchos_OpaqueWrapper.hpp>
Base class for wrapped opaque objects.
If you want to create an RCP to an opaque handle (an instance of a type like MPI_Comm), use the opaqueWrapper() nonmember template function. If that opaque handle needs to be freed after all references to it go away, then supply a "free" function to opaqueWrapper(). The type returned by opaqueWrapper() is
RCP<OpaqueWrapper<T> >, an RCP (reference-counted "smart" pointer) to a wrapper of the opaque handle type T. Users are not allowed to construct an OpaqueWrapper object explicitly. You must use the opaqueWrapper() nonmember function to do so.
In order to understand this documentation, you must first have learned how to use RCP (Teuchos' reference-counted "smart" pointer class) to manage dynamically allocated memory and other resources. It also helps to be familiar with MPI (the Message Passing Interface for distributed-memory parallel programming), but this is not required.
Many different software libraries use the opaque handle (a.k.a. opaque object) idiom to hide the internals of a data structure from users. This standard technique allows users to treat an instance of a data structure as a handle. Users may pass the handle around as if it were a simple value type (like int), and must call nonmember functions in order to create, operate on, use, or destroy instances of the data structure. The MPI (Message Passing Interface) standard is full of examples of the opaque handle idiom, including MPI_Comm (for communicators), MPI_Datatype (for standard and custom data types), and MPI_Op (for standard and custom reduction operations).
In general, opaque handles (corresponding to the Opaque template parameter) must be assignable. This means that copy construction and assignment (operator=) must be syntactically correct for instances of Opaque. This is certainly true of MPI's opaque handle types.
Opaque handles are a useful technique, but they interfere with correct use of reference-counted "smart" pointer types such as Teuchos' RCP or std::shared_ptr. We will explain below why this is the case. The OpaqueWrapper base class allows opaque handles to be wrapped by a real object, whose address you can take. This is needed in order to wrap an opaque object in a RCP, for example.
The OpaqueWrapper class was motivated by MPI's common use of the opaque handle idiom. For MPI, passing MPI_Comm, MPI_Datatype, and MPI_Op objects around by handles hides implementation details from the user. Handles also make it easier to access MPI functionality from Fortran, so that C, C++, and Fortran can all share the same handle mechanism. In fact, some MPI implementations (such as MPICH, at least historically if not currently) simply implement these handles all as integers. (As the MPI standard's advice to implementers suggests, such an implementation would likely maintain a table for each MPI process that maps the integer value to a pointer to the corresponding object.) For example, MPI_Comm might be a typedef to int, and MPI_COMM_WORLD might be a C preprocessor macro for a literal integer value:
typedef int MPI_Comm; #define MPI_COMM_WORLD 42
In this case, the expression
rcp(&MPI_COMM_WORLD) would not even compile, since one cannot take the address of an integer literal such as 42. (Remember that preprocessor macros get replaced with their values before the C++ compiler does its work.) To make this expression compile, one might try the following:
// THIS FUNCTION IS WRONG. IT MAY SEGFAULT. Teuchos::RCP<MPI_Comm> getMpiCommPtr() { MPI_Comm comm = MPI_COMM_WORLD; // WRONG!!! comm is a stack variable! return Teuchos::rcp (&comm, false); }
Using the returned communicator would result in undefined behavior, which in practice might be a segfault, memory corruption, or MPI getting severely confused. This is because the stack variable
comm, which may be just an integer, disappears at the end of the function. Its address would no longer point to valid memory after the function returns.
The following code is syntactically correct, but may leak memory:
// THIS CODE LEAKS MEMORY FOR GENERAL MPI_Comm OBJECTS. Teuchos::RCP<MPI_Comm> getMpiCommPtr (MPI_Comm comm) { MPI_Comm *pComm = new MPI_Comm (comm); // Works for comm==MPI_COMM_WORLD or MPI_COMM_SELF; // leaks memory for user-created MPI_Comm objects. return Teuchos::rcp (pComm); }
The above implementation of getMpiCommPtr() is correct only for the standard MPI_Comm objects provided by MPI, like MPI_COMM_WORLD and MPI_COMM_SELF. It is not correct, and in fact may leak memory, for custom MPI_Comm objects that the user creates by calling functions like MPI_Comm_split(). This is because user-created MPI_Comm objects must be freed by MPI_Comm_free(). Other kinds of opaque objects, like MPI_Datatype and MPI_Op, have their own free functions. Thus, even if opaque handles have the type integer, they really behave like pointers or references. Some of them can and should be freed at the end of their useful lives; others must not. (Compare std::ostream; std::cout should never be closed by typical user code, but an output file should be closed.)
We fix this problem by providing the OpaqueWrapper template base class and the opaqueWrapper() nonmember template function. Use this function to wrap an opaque handle (like an MPI_Comm) in an RCP. This ensures that the RCP does the right thing in case the handle must be freed. For example, to wrap MPI_COMM_WORLD in a RCP, just do this:
RCP<OpaqueWrapper<MPI_Comm> > comm = opaqueWrapper (MPI_COMM_WORLD);
If you instead want to create a custom MPI_Comm using a function like MPI_Comm_split(), then you may wrap it in an RCP as follows (please see discussion later about MPI_Comm_free()):
MPI_Comm rawComm; // We omit all arguments but the last of MPI_Comm_split, for clarity. int errCode = MPI_Comm_split (..., &rawComm); if (errCode != MPI_SUCCESS) { // ... Handle the error ... } RCP<OpaqueWrapper<MPI_Comm> > comm = opaqueWrapper (rawComm, MPI_Comm_free);
The optional second argument to opaqueWrapper() is a "free" function. It has type OpaqueFree which is a template parameter. If the free function is provided, then when the RCP's reference count goes to zero, that function is called to "free" the handle. If opaqueFree is a free function, then the following must be syntactically valid, where
opaque has type
Opaque:
opaqueFree (&opaque);
The function's return value, if any, is ignored. Furthermore, the OpaqueFree type must be copy constructible. (A function pointer is trivally copy constructible.)
Users are responsible for knowing whether to provide a free function to opaqueWrapper(). In this case, because we created an MPI_Comm dynamically using a communicator "constructor" function, the MPI_Comm must be "freed" after use. RCP will automatically call the "free" function once the reference count of
comm reaches zero.
commwill go to zero before MPI_Finalize is called. This is because it's not valid to call MPI_Comm_free after MPI_Finalize has been called. The details::safeCommFree function checks whether MPI_Finalize has been called (via MPI_Finalized) before calling MPI_Comm_free; you may use this function as the free function if you are concerned about this.
Definition at line 240 of file Teuchos_OpaqueWrapper.hpp.
Constructor that accepts and wraps a raw handle.
Users typically never have to invoke the constructor explicitly. The opaqueWrapper() nonmember template function does this for them.
Definition at line 246 of file Teuchos_OpaqueWrapper.hpp.
Implicit type conversion from wrapper to raw handle.
Users typically never have to convert directly from an OpaqueWrapper to the raw handle that it wraps. For example, if you have an
RCP<OpaqueHandle<T> >, just deferencing the RCP will return the raw handle via this implicit type conversion operator:
// We omit the right-hand side of this assignment, for simplicity. RCP<OpaqueWrapper<T> > wrapped = ...; // RCP's operator* returns OpaqueWrapper<T>&. // In turn, the operator below automatically converts to T. T raw = *wrapped;
Definition at line 263 of file Teuchos_OpaqueWrapper.hpp.
Explicit type conversion from wrapper to raw handle.
Users typically never have to convert directly from an OpaqueWrapper to the raw handle that it wraps. However, in case they do, we provide this operator.
Definition at line 270 of file Teuchos_OpaqueWrapper.hpp.
Create a new
OpaqueWrapper object without a free function.
See the documentation of OpaqueWrapper for a detailed explanation of why and how to use this function.
Definition at line 354 of file Teuchos_OpaqueWrapper.hpp.
Create a new
OpaqueWrapper object with a free function.
See the documentation of OpaqueWrapper for a detailed explanation of why and how to use this function.
Definition at line 370 of file Teuchos_OpaqueWrapper.hpp.
The actual handle.
This is protected and not private so that OpaqueWrapperWithFree can access it. In general, one should avoid using protected data, but it would be silly to add member functions just for this simple use case.
Definition at line 279 of file Teuchos_OpaqueWrapper.hpp.
|
http://trilinos.sandia.gov/packages/docs/dev/packages/teuchos/browser/doc/html/classTeuchos_1_1OpaqueWrapper.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Join the NASDAQ Community today and get free, instant access to portfolios, stock ratings, real-time alerts, and more! news around the markets:
Asian Markets
Asian shares were mostly higher following the dovish comments
from Bernanke and the comments from Chinese Premier Li. The
Japanese Nikkei 225 Index rose 0.39 percent and the Topix Index
fell 0.04 percent. In Hong Kong, the Hang Seng Index rose 2.55
percent and the Shanghai Composite Index rose 3.23 percent in
China. Also, the Korean Kospi gained 2.93 percent and Australian
shares rose 1.31 percent.
European Markets
European shares were also higher overnight on the back of the
bullish sentiment from around the world. The Spanish Ibex Index
rose 0.11 percent and the Italian FTSE MIB Index gained 0.58
percent. Meanwhile, the German DAX rose 1.03 percent and the French
CAC 40 Index gained 0.72 percent while U.K. shares rose 0.77
percent.
Commodities
Commodities were mostly higher, especially metals, after
Bernanke talked down the dollar. WTI Crude futures rose 0.17
percent to $106.70 per barrel and Brent Crude futures gained 0.04
percent to $108.55 per barrel. Copper futures rose 3.12 percent to
$318.75 per pound. Gold was higher and silver futures gained 4.41
percent to $20.01 per ounce.
Currencies
Currency markets showed broad dollar weakness overnight however
moves in most major dollar pairs were off of extreme levels. The
EUR/USD was higher at 1.3041 after touching nearly 1.32 and the
dollar fell against the yen to 99.39 after falling as far as 98.60.
Overall, the Dollar Index fell 1.13 percent on weakness against the
Swiss franc, the Canadian dollar, the pound, the euro, and the
yen.
Earnings Reported Yesterday
Key companies that reported earnings Tuesday include:
Pre-Market Movers
Stocks moving in the pre-market included:
Earnings
Notable companies expected to report earnings Thursday
include:
On the economics calendar Thursday, initial jobless claims and
import and export prices are due out followed by the Bloomberg
Consumer Comfort Index. Also, Fed Governor Daniel Tarullo is
expected to speak and the Treasury is set to auction 30-year bonds
and give its budget statement. Overnight, the Spanish CPI report
and Eurozone Industrial Production data?
|
http://www.nasdaq.com/article/benzinga-market-primer-thursday-july-11-futures-rise-after-bernanke-speaks-cm258727
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
note skx <p>I don't maintain a database handle throughout the transaction - instead I use the connection object to make notes as the SMTP transaction is completed.</p> <p>e.g.</p> <code> sub hook_helo { my ( $self, $transaction, $host ) = @_; # # Make sure helo includes a domain # if ( $host !~ /\./ ) { $self->log( LOGWARN, "HELO $host doesn't contain a period." ); $transaction->notes( "reject", 1 ); $transaction->notes( "reason", "invalid helo" ); } return DECLINED; } </code> <p>Then I have a series of plugins which do different things at the last step, either forward the message or reject it, but archive a searchable copy for the recipients benefit. Here's a simplified version of the reject + archive plugin:</p> <code> sub hook_queue { my ( $self, $transaction ) = @_; # # We only log mails which have been rejected. # if ( 0 == ( $transaction->notes("reject") || 0 ) ) { return DECLINED; } # connect to DB # archive message # disconnect return ( DECLINED, "Rejected this is spam: " . $transaction-Notes("reason" ) ); } </code> <p>(Actually this is a polite fiction. I actually archive messages to local disk, if they were to be rejected, then later rsync them to a central location - and import them to MySQL there.</p> <!-- Node text goes above. Div tags should contain sig only --> <div class="pmsig"><div class="pmsig-194370"> <a href="">Steve</a><br/> -- <br/> </div></div> 730642 730920
|
http://www.perlmonks.org/?displaytype=xml;node_id=731028
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Hello,
I've used QSysInfo::symbianVersion() function in Nokia C6-01 device (at the RDA), and I got SV_9_4 in return (instead of SV_SF_3 , for Symbian 3).
I've made sure that the device is really running Symbian 3 (it does).
Does anyone have an idea why do I get that return value, and what should I do in order to get the correct value?
Thanks,
Keren
|
http://developer.nokia.com/Community/Discussion/showthread.php/216270-QSysInfo-symbianVersion()-wrong-value-for-Nokia-C6-01
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Hello Henning, Sunday, October 22, 2006, 5:48:11 PM, you wrote: > I don't see the benefit of allowing imports anywhere at top-level. it is useful to move together imports and related code. say: #if HUGS import Hugs.Base addInt = hugsAddInt #elseif GHC import GHC.Base addInt = ghcAddInt #endif currently we are forced to make separate sections for import and use: #if HUGS import Hugs.Base #elseif GHC import GHC.Base #endif #if HUGS addInt = hugsAddInt #elseif GHC addInt = ghcAddInt #endif just another example: -- higher-level stuff: openFile = ... -- medium-level-stuff createFD = .. -- low-level stuff import System,FD _create = ... System.FD.create (i don't propose subj. i just know pros and contras for it) -- Best regards, Bulat mailto:Bulat.Ziganshin at gmail.com
|
http://www.haskell.org/pipermail/haskell-prime/2006-October/001775.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Hi Diego, On Sat, 2007-01-06 at 16:01 +0100, Diego Biurrun wrote: [...] > > --- tests/Makefile (revision 7406) > > +++ tests/Makefile (working copy) > > @@ -16,7 +16,14 @@ > > > > +ifeq ($(CONFIG_SWSCALER),yes) > > +all fulltest test: > > + @echo > > + @echo "Cannot perform tests if libswscaler is enabled" > > Nit: Missing period at the end. > > Also, there is no such thing as libswscaleR. The directory is called > libswscale, the program software scaler, the configure option swscaler. > We should settle on a name someday. Ok; maybe the right thing to write here should be "... if the software scaler is enabled", since "libswscale is enabled" does not make too much sense :). > > + @echo > > +else > > all fulltest test: codectest libavtest test-server > > +endif > > Do both codectest and libavtest fail? Yes, they both fail. > What if I call codectest or libavtest directly? > I think your patch will not help then... You are right; I tried to make the patch as simple as possible by only considering the most common case (make test). I'll rewrite the patch protecting all the "codectest", "libavtest", and "test-server" targets with "ifeq ($(CONFIG_SWSCALER),yes)". Thanks, Luca
|
http://ffmpeg.org/pipermail/ffmpeg-devel/2007-January/027122.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
This document is also available as a non-normative single HTML file.
Copyright © 2009 [OWL]. This document describes the ontology and gives details of each term that it contains.
The normative definition of the ontology terms is generated automatically from the OWL of a possible future W3C Recommendation, following the Second Public working draft published on April 15th 2008 (see the changes since the previous publication). As such the UWA WG considers the present draft sufficiently stable and mature. Nonetheless, it is expected to be complementary to the work being done on other aspects of the Delivery Context, namely Personalization. The Working Group maintains an Frequently Asked Questions document about the DCO, which may aid the reader.
Comments on this Last Call Working Draft are accepted until 7 July 2009, please send comments to the public public-uwa@w3.org mailing list (archived at). Overview
1.2 Motivation
1.3 Understanding the Delivery Context Ontology
1.4 Scope
2 Reading the Recommendation
2.1 Normative and Informative Parts
2.2 Normative Language for Conformance Requirements
2.3 CURIE Prefix Bindings
2.4 Reading Term Descriptions
2.4.1 Class Description Specific Fields
2.4.2 Property Description Specific Fields
2.4.3 Instance Description Specific Fields
2.5 Class Disjointness
2.6 Normative Instance Disjointness
3 The Ontology
3.1 Definition
3.2 URI
3.3 Namespaces
3.4 Term Groups
3.4.1 Common
3.4.3 Hardware
3.4.4 Java
3.4.5 Location
3.4.2 Main
3.4.6 Network
3.4.7 Push
3.4.8 Software
3.4.9 Web Browsing
3.5 Measurement Units Representation
3.6 Instances
4 Conformance
A Class Hierarchy Summary
B Property Hierarchy Summary
C Summary of Changes since the Second Public Working Draft
C.1 Deleted Classes
C.2 Added Classes
C.3 Deleted Properties
C.4 Added Properties
D Ontology Resources (Non Normative)
E Acknowledgements (Non Normative)
This section is informative.
The DeliveryContext Ontology provides a formal model of the characteristics of the environment in which devices interact with the Web or other services. The Delivery Context includes the characteristics of the Device, the software used to access the service and the Network providing the connection among others.
The Ontology captures the Context of Use in which a user is interacting with a particular computing platform in a given physical environment in order to achieve an interactive task.
The Ontology is formally specified in the Web Ontology Language [OWL]. It defines a normative vocabulary of terms (classes, properties and instances) that models the different Properties, Aspects and Components (Aspect instances) of a Delivery Context.
The Delivery Context is an important source of information that can be exploited to create context-aware applications, thus providing a compelling user experience. Particularly, it can be used to adapt web content & applications to make them useable on a wide range of different devices with different capabilities.
The Ontology represents a normative, common understanding about the Delivery Context. As such it can be used as a normative reference to create specific Vocabularies, while at the same time enabling the interoperability between them.
The Delivery Context Ontology itself constitutes a vocabulary of terms and can be used in conjunction with generic APIs for retrieving Context Properties, such as [DCCI].
It is recommended to be familiar with RDF, OWL and ontologies in general before reading this specification. The [RDF-Primer] and the [OWL-Guide] are two documents which might be helpful for this purpose.
The model represented by the ontology is essentially hierarchical. At the top of the hierarchy is the
DeliveryContext class, which gives access to the current
UserAgent,
NetworkBearer,
Device,
RuntimeEnvironment and pysical
Environment, which are the essential elements of any Delivery Context. Each of these elements are represented by classes which have different properties that model their specific characteristics and Components.
There are a number of generic properties which domain and / or range has been left deliberately open in order to maximize reuse and genericity. For example, the Object Property
common:supports, is devoted to convey what it is supported (formats, fonts, features, etc.) by any Delivery Context Entity. As such it can be used with both entities that are currently modelled within the Ontology or new entities that might appear in the future (Ontology extensions).
The UWA Working Group intends to publish tutorial materials related to this specification, containing at least:
The Delivery Context Ontology is aimed at providing a formal and universally accepted model of the Delivery Context. As such it is not intended to model properties which can be application or domain dependent. For instance, there are a number of properties that can be derived from the properties explicitly modelled by the Ontology.
On the other hand it is noteworthy to remark that certain facets of the Context of Use are not covered by this specification. Specific examples are the User Context or a full description of the physical environment (temperature, noise, light ...). It is expected that such descriptions will be included in future versions of this specification.
This section is normative.
The normative and informative parts of this specification are identified by use of labels within various sections. Generally, everything in the specification is considered to be normative, apart from the examples.
Individual conformance requirements or testable statements are identified by the use of specific key words. In particular, the key words must, must not, required, shall, shall not, should, should not, recommended, may, and optional in this specification are to be interpreted as described in [IETF RFC 2119].
This specification makes use of [CURIEs] as an abbreviated syntax for expressing URIs. The following CURIE prefix bindings are defined:
Each term (class, property, normative instance) is documented in an specific section within this document. In addition there are cross references that link related terms between them.
A term description is composed by the following fields:
rdfs:label) and full text (
rdfs:comment).
It contains a list of normative references that describe precisely the intended meaning of the ontology term.
It is a list of informative references that clarifies the meaning of the ontology term.
The fields used in a class description are described as follows:
This field contains a list of all those ontology classes for which the described class is
a subclass (
rdfs:subClassOf axiom).
This field contains a list of all those ontology classes defined as a subclass (
rdfs:subClassOf axiom) of the described class.
This field contains a list of all those ontology properties whose domain includes the class.
This field contains a list of all those ontology properties whose range includes the class.
This field contains the property restrictions for the class. Property restrictions are documented in accordance with the OWL Abstract Syntax.
This field is a list of CURIEs with the normative instances (if any) for the class.
This field indicates the property type and characteristics.
This field indicates the property domain. If it does not appear it means that the property domain can be any class (
owl:thing).
This field denotes the property range. The range of a Datatype property is expressed in terms of an XML Schema datatype [XMLSCHEMA-2] or as a datarange, which enumerates the list of allowed property values. The range of an Object property is a list of classes. When the range does not appear it means that it can be anything.
This field contains a list of all those ontology properties for which the described property is
a subproperty (
rdfs:subPropertyOf axiom).
This field contains a list of all those ontology properties defined as a subproperty of (
rdfs:subPropertyOf axiom) the described property.
This field is a list that represents the class membership of the instance.
This field is composed by a list that represents the property values (represented as RDF Typed Literals) that must have the described instance.
The disjointness between classes is detailed in specific sections. For compactness reasons the syntax used is the
DisjointClasses axiom which allows to define sets of classes which are pairwise disjoint.
All the normative instances defined by this specification are pairwise disjoint.
This section is normative.
The ontology is formally specified in OWL [OWL]. The documentation of the different ontology terms has been automatically generated from the OWL file.
The ontology conforms to the OWL-DL expressivity. This allows it to be used within appropriately written reasoning systems.
This section is normative.
The ontology URI is .
This section is normative.
The table below describes the different namespace URIs defined by the Ontology. Namespaces allow to create groups of terms.
This section is normative.
For modularity reasons, the Ontology has been splitted into the following groups of terms:
This section is informative.
A number of properties in the Ontology represent physical magnitudes. The same magnitude, for instance "length", can be measured using different units ("meters", "inches", "feet", "milimeters", etc), yielding to different numeric values depending on the unit used. However, from a conceptual point of view, all of these values are (or should be) actually equivalent.
RDF nor OWL supports to tag a literal value with its measurement unit. In other words, RDF literal nodes can represent discrete numeric values, such as "10" or "9.81", but a literal node does not capture the unit used to express the value. On the other hand, the problem of ontological modelling of measurement units is still open to debate and goes beyond the scope of this specification. For example, the [MUO Ontology] is a recent proposal for modelling measurement units using OWL.
In this version of the Ontology, each Property that represents a magnitude has to be expressed normatively in a single unit. This solution is simple and at the same time maximizes interoperability. However, it is also recognized that this approach might not be suitable in all scenarios, due to both practical (some scales are more conveniently captured using certain units) and cultural reasons. Therefore, it is expected that, once a Measurement Units Ontology is widely adopted (and standardized), future versions of this specification will allow to represent magnitudes using different units.
This section is informative.
Only a minimum set of instances has been declared as normative in this specification. This has been done for simplicity and maintainability reasons. On the other hand it is important to remark that Vocabularies based on the Ontology may define their own normative instances for representing specific values within their respective domains. For example, the [DDR Core Vocabulary] enumerates different instances for image formats or markup languages.
Nonetheless, the main OWL file (distributed as a companion resource), contains a number of utility instances that can come in handy for Ontology users or implementors. Instances for character sets, MIME Types, network technologies and formats, among others, are included.
In addition, example instances (under the namespace) are also distributed with the only purpose of illustrating how the Ontology works in practice.
This section is normative.
A conforming implementation of this Recommendation must implement all the normative sections of this document.
This section is informative.
To improve the readability of the specification a Class Hierarchy Summary has been automatically generated from the ontology itself.
This section is informative.
A graphical representation of the Datatype Property Hierarchy and the Object Property Hierarchy has been automatically generated from the Ontology itself.
This section is informative.
A summary of changes has been created in order to enumerate:
OWL File (includes utility and example instances)
The editors wish to acknowledge the contributions of members of the UWA WG.
The editors wish to acknowledge the specific written contributions of:
|
http://www.w3.org/2007/uwa/editors-drafts/DeliveryContextOntology/LastCallWD-April2009/
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Design By Contract or DBC defines that methods should have defined input and output verifications. Therefore, you can be sure you are always working with a usable set of data in all methods and everything is behaving as expected. If not, exceptions or errors should be returned and handled from the methods. To read more on DBC read the wikipedia page here.
In our example here, we are working with input parameters that may possibly be null. As a result a NullReferenceException would be thrown from this method because we never verify that we have an instance. During the end of the method, we don’t ensure that we are returning a valid decimal to the consumer of this method and may introduce methods elsewhere.
1: public class CashRegister
2: {
3: public decimal TotalOrder(IEnumerable<Product> products, Customer customer)
4: {
5: decimal orderTotal = products.Sum(product => product.Price);
6:
7: customer.Balance += orderTotal;
8:
9: return orderTotal;
10: }
11: }
The changes we can make here to introduce DBC checks is pretty easy. First we will assert that we don’t have a null customer, check that we have at least one product to total. Before we return the order total we will ensure that we have a valid amount for the order total. If any of these checks fail in this example we should throw targeted exceptions that detail exactly what happened and fail gracefully rather than throw an obscure NullReferenceException.
It seems as if there is some DBC framework methods and exceptions in the Microsoft.Contracts namespace that was introduced with .net framework 3.5. I personally haven’t played with these yet, but they may be worth looking at. This is the only thing I could find on msdn about the namespace.
1: public class CashRegister
2: {
3: public decimal TotalOrder(IEnumerable<Product> products, Customer customer)
4: {
5: if (customer == null)
6: throw new ArgumentNullException("customer", "Customer cannot be null");
7: if (products.Count() == 0)
8: throw new ArgumentException("Must have at least one product to total", "products");
9:
10: decimal orderTotal = products.Sum(product => product.Price);
11:
12: customer.Balance += orderTotal;
13:
14: if (orderTotal == 0)
15: throw new ArgumentOutOfRangeException("orderTotal", "Order Total should not be zero");
16:
17: return orderTotal;
18: }
19: }
It does add more code to the method for validation checks and you can go overboard with DBC, but I think in most scenarios it is a worthwhile endeavor to catch sticky situations. It really stinks to chase after a NullReferenceException without detailed information.
This is part of the 31 Days of Refactoring series. For a full list of Refactorings please see the original introductory post.
Post Footer automatically generated by Add Post Footer Plugin for wordpress.
|
http://lostechies.com/seanchambers/2009/08/25/refactoring-day-25-introduce-design-by-contract-checks/
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
09 December 2009 17:15 [Source: ICIS news]
By Nigel Davis
?xml:namespace>
The energy giant’s latest world outlook for energy suggests that demand will grow by 1.2% a year over the 25 years to 2030. That’s an increase of 35%.
More importantly, energy demand is forecast to remain relatively flat in the OECD (Organisation for Economic Co-operation and Development) nations but to grow 65% in the developing world. And that takes into account greater efficiencies globally as well as economic growth in the OECD of 50% over the period.
Power generation will demand most energy: 55% of the total growth in demand to 2030, by which time it will represent 40% of total primary energy demand.
Fossil fuels will continue to provide most of the fuel for power generation but a focus on carbon capture will drive production from coal to natural gas and open further opportunities for wind and nuclear power generation.
“In our energy outlook, we see many hopeful things – economic recovery and growth, improved living standards and a reduction in poverty, and promising new energy technologies,” ExxonMobil CEO Rex Tillerson said on release of the report.
“But we also see a tremendous challenge, and that is how to meet the world’s growing energy needs while also reducing the impact of energy use on the environment.”
Publication of the global energy study at the time of the UN
Controlling carbon emission within such an environment can only get more, not less, difficult. It is also potentially hugely costly.
Point Carbon last month calculated that it might cost ExxonMobil $5.9bn (€4.0bn) a year simply to purchase the carbon credits it would need to operate under a potential
ExxonMobil says that natural gas would become the most economically attractive fossil fuel for new power stations if CO2 were priced at $30/tonne, a carbon price it expects to be reached over the next 10 years. (In its November study, Point Carbon assumed a carbon price of $15/tonne.)
At $60/tonne, nuclear and wind power are attractive and high growth for both are assumed in the ExxonMobil study. By 2030, 40% of the world’s electricity will be generated by nuclear and renewable fuels, it says.
But carbon capture and storage schemes at this emission price per tonne still look expensive, as do large-scale solar power facilities.
ExxonMobil, however, forecasts that wind, solar and biofuels will grow at nearly 10% a year on average to 2030 but, because they are starting from a low base, will still only be contributing about 2.5% of global energy needs.
The company tellingly says that “the most important ‘fuel’ of all will be energy saved through fuel efficiency” and in doing so points to opportunities for producers of chemicals.
It estimates that efficiency gains of about 300 quadrillion Btu a year can be achieved by 2030, which is equivalent to twice the growth in energy demand over the period.
Now that’s a gain that will require materials and innovation - the backbone for the chemicals industry.
Sector companies face a huge energy challenge certainly and, more likely than not, a costly burden as they try to reduce carbon emissions or pay more for the CO2 they produce.
They will, however, be able to tap into growing markets for energy efficient materials and the demand for more energy efficient products.
($1 = €0.68)
|
http://www.icis.com/Articles/2009/12/09/9317898/insight-exxonmobil-energy-review-highlights-cost-and.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
in reply to
Comparing Perl with C#, a simple templating example
Your example is my first intro to C#. I probably won't live long enough to ever have to program in C# since I get to pick the languages that I use, but I can see a serious effort by MS to make something useful. To the 'C++' programmers, it is a natural extension to what they already know. But, I've never really trusted MS for their marketing and development plan. I hold some serious grudges with them for having intentionally gone after the destruction of non-proprietary (read "open") standards. They were doing this back when they first introduced their C-language compiler and libraries. They weren't alone in doing that. I criticized Borland for the same sin.
It is hard to embrace a technology you don't trust.
"I probably won't live long enough to ever have to program in C#"
Try it at least once, at least I will ;-)
As a matter of fact, one of our GUI user interface needs to be totally rewritten becaue of the user requirement changed. I am looking at C#, it was in Open Road (an engres thingy).
BTW, for most of the application areas, C# is not targetting any thing else, but Java. GUI application is a good candidator to try C# with, because of the IDE. I took a look at the way C# defines event, handles event are quite interesting. C# even tries to beat Java with small things like how to define getters and setters.
Perl should not be the one feels the most pressure from C#. For example, in this case, as a GUI interface thing, at least from my point of view, Perl is not a candidator any way.
C# even tries to beat Java with small things like how to define getter
+s and setters.
[download]
using System;
public class MyClass
{
private string person = "";
public string Person
{
set
{
this.Person = value;
}
get
{
return this.person;
}
}
public static void Main()
{
MyClass app = new MyClass();
app.run();
}
public void run()
{
string temp = "temp";
// Crash me
this.Person =
|
http://www.perlmonks.org/index.pl?node_id=399909
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
.smtp; 19 20 /*** 21 * SMTPReply stores a set of constants for SMTP reply codes. To interpret 22 * the meaning of the codes, familiarity with RFC 821 is assumed. 23 * The mnemonic constant names are transcriptions from the code descriptions 24 * of RFC 821. 25 ***/ 26 27 public final class SMTPReply 28 { 29 30 public static final int SYSTEM_STATUS = 211; 31 public static final int HELP_MESSAGE = 214; 32 public static final int SERVICE_READY = 220; 33 public static final int SERVICE_CLOSING_TRANSMISSION_CHANNEL = 221; 34 public static final int ACTION_OK = 250; 35 public static final int USER_NOT_LOCAL_WILL_FORWARD = 251; 36 public static final int START_MAIL_INPUT = 354; 37 public static final int SERVICE_NOT_AVAILABLE = 421; 38 public static final int ACTION_NOT_TAKEN = 450; 39 public static final int ACTION_ABORTED = 451; 40 public static final int INSUFFICIENT_STORAGE = 452; 41 public static final int UNRECOGNIZED_COMMAND = 500; 42 public static final int SYNTAX_ERROR_IN_ARGUMENTS = 501; 43 public static final int COMMAND_NOT_IMPLEMENTED = 502; 44 public static final int BAD_COMMAND_SEQUENCE = 503; 45 public static final int COMMAND_NOT_IMPLEMENTED_FOR_PARAMETER = 504; 46 public static final int MAILBOX_UNAVAILABLE = 550; 47 public static final int USER_NOT_LOCAL = 551; 48 public static final int STORAGE_ALLOCATION_EXCEEDED = 552; 49 public static final int MAILBOX_NAME_NOT_ALLOWED = 553; 50 public static final int TRANSACTION_FAILED = 554; 51 52 // Cannot be instantiated 53 private SMTPReply() 54 {} 55 56 /*** 57 * Determine if a reply code is a positive preliminary response. All 58 * codes beginning with a 1 are positive preliminary responses. 59 * Postitive preliminary responses are used to indicate tentative success. 60 * No further commands can be issued to the SMTP server after a positive 61 * preliminary response until a follow up response is received from the 62 * server. 63 * <p> 64 * <b> Note: </b> <em> No SMTP commands defined in RFC 822 provide this 65 * type of reply. </em> 66 * <p> 67 * @param reply The reply code to test. 68 * @return True if a reply code is a postive preliminary response, false 69 * if not. 70 ***/ 71 public static boolean isPositivePreliminary(int reply) 72 { 73 return (reply >= 100 && reply < 200); 74 } 75 76 /*** 77 * Determine if a reply code is a positive completion response. All 78 * codes beginning with a 2 are positive completion responses. 79 * The SMTP server will send a positive completion response on the final 80 * successful completion of a command. 81 * <p> 82 * @param reply The reply code to test. 83 * @return True if a reply code is a postive completion response, false 84 * if not. 85 ***/ 86 public static boolean isPositiveCompletion(int reply) 87 { 88 return (reply >= 200 && reply < 300); 89 } 90 91 /*** 92 * Determine if a reply code is a positive intermediate response. All 93 * codes beginning with a 3 are positive intermediate responses. 94 * The SMTP server will send a positive intermediate response on the 95 * successful completion of one part of a multi-part sequence of 96 * commands. For example, after a successful DATA command, a positive 97 * intermediate response will be sent to indicate that the server is 98 * ready to receive the message data. 99 * <p> 100 * @param reply The reply code to test. 101 * @return True if a reply code is a postive intermediate response, false 102 * if not. 103 ***/ 104 public static boolean isPositiveIntermediate(int reply) 105 { 106 return (reply >= 300 && reply < 400); 107 } 108 109 /*** 110 * Determine if a reply code is a negative transient response. All 111 * codes beginning with a 4 are negative transient responses. 112 * The SMTP server will send a negative transient response on the 113 * failure of a command that can be reattempted with success. 114 * <p> 115 * @param reply The reply code to test. 116 * @return True if a reply code is a negative transient response, false 117 * if not. 118 ***/ 119 public static boolean isNegativeTransient(int reply) 120 { 121 return (reply >= 400 && reply < 500); 122 } 123 124 /*** 125 * Determine if a reply code is a negative permanent response. All 126 * codes beginning with a 5 are negative permanent responses. 127 * The SMTP server will send a negative permanent response on the 128 * failure of a command that cannot be reattempted with success. 129 * <p> 130 * @param reply The reply code to test. 131 * @return True if a reply code is a negative permanent response, false 132 * if not. 133 ***/ 134 public static boolean isNegativePermanent(int reply) 135 { 136 return (reply >= 500 && reply < 600); 137 } 138 139 }
|
http://commons.apache.org/proper/commons-net/xref/org/apache/commons/net/smtp/SMTPReply.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
public class Date extends Object implements Serializable, Cloneable, Comparable<Date>
finalize, getClass, notify, notifyAll, wait, wait, wait()
@Deprecated.
Calendar
@Deprecated.
Calendar
@Deprecated..
Calendar
@Deprecated.
DateFormat
@Deprecated public int getYear()
Calendar.get(Calendar.YEAR) - 1900.
Dateobject, as interpreted in the local time zone.
Calendar
@Deprecated.. June 1, 1996, daylight saving time (Eastern Daylight Time) is in use, which is offset only four hours from UTC.because on June 1, 1996, daylight saving)
Calendar.ZONE_OFFSET,
Calendar.DST_OFFSET,
TimeZone.getDefault().
|
http://docs.oracle.com/javase/7/docs/api/java/util/Date.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
How to: Provide Custom Toolbox Items By Using the Managed Package Framework
A VSPackage based on the Managed Package Framework can extend Visual Studio Toolbox functionality by adding controls, objects derived from ToolboxItem objects. Each ToolboxItem is implemented by an object derived from Component.
A VSPackage based on the Managed Package Framework must register itself as a Toolbox control provider through .NET Framework attributes and handle Toolbox-related events.
To configure a VSPackage as a Toolbox Item Provider
Create an instance of the ProvideToolboxItemsAttribute applied to the class implementing Package. For example:
namespace Vsip.LoadToolboxMembers { [ProvideToolboxItems(14)] [DefaultRegistryRoot("Software\\Microsoft\\VisualStudio\\8.0")] [InstalledProductRegistration(false, "#100", "#102", "1.0", IconResourceID = 400)] [ProvideLoadKey("Standard", "1.0", "Package Name", "Company", 1)] [ProvideMenuResource(1000, 1)] [Guid("YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY")] public class LoadToolboxMembers : Package {
If the ToolboxItem objects provide non-standard Toolbox Clipboard formats, an instance of ProvideToolboxFormatAttribute must be applied to the class implementing the Package object for each Clipboard format supported by the ToolboxItem objects that the VSPackage provides.
For more information on supported Toolbox Clipboard formats, see Toolbox (Visual Studio SDK).
If the VSPackage provides the dynamic configuration of ToolboxItem, it must:
Apply an instance of ProvideToolboxItemConfigurationAttribute constructed using the Type that the package uses to implement the IConfigureToolboxItem interface.
On a public class independent of the VSPackage's Package, the VSPackage must implement the IConfigureToolboxItem interface.
An instance of the ProvideAssemblyFilterAttribute must be applied to the class implementing IConfigureToolboxItem, using a string containing a selection criteria (filter) as the argument to the ProvideToolboxItemConfigurationAttribute instance's constructor.
For information on how to notify the Visual Studio environment that a VSPackage provides Toolbox controls, see Registering Toolbox Support Features.
For an example illustrating how one might implement IConfigureToolboxItem support, see Walkthrough: Dynamic Customization of ToolboxItem Configuration.
VSPackages providing a ToolboxItem must handle ToolboxInitialized and ToolboxUpgraded events.
Implement handlers for the ToolboxInitialized and ToolboxUpgraded events:
This is typically done in the Package implementation's Initialize method:
For an example of how to implement handlers for ToolboxInitialized and ToolboxUpgraded events, see Walkthrough: Autoloading Toolbox Items.
The underlying implementation of a Toolbox control must be derived from Component and encapsulated in the default or a derived implementation of the ToolboxItem object.
The easiest way to provide a Component-derived implementation of Toolbox controls is by extending an object derived from Control, in particular, the UserControl class.
To create Toolbox controls
Use Solution Explorer's Add New Item command to create a Toolbox object that implements UserControl.
For more information on authoring Windows Forms controls and toolbox controls, see Developing Custom Windows Forms Controls with the .NET Framework or Walkthrough: Autoloading Toolbox Items.
(Optional) An application can choose to use a custom object derived from the ToolboxItem object to provide its Toolbox control to the Toolbox.
A custom implementation derived from ToolboxItem can extend an application by providing greater control over how the ToolboxItem data is serialized, enhanced handling of designer metadata, support for non-standard Clipboard formats, and functionality that allows end-user interaction.
In the example, users are prompted by a dialog box to select features:
[ToolboxItemAttribute(typeof(CustomControl))] [Serializable] class CustomControl : ToolboxItem { public CustomControl(Type type) : base(typeof(CustomControl)) {} public CustomControl(Type type, Bitmap icon) : base(typeof(SCustomControl)) { this.DisplayName = "CustomContorl"; this.Bitmap = icon; } private CustomControl(SerializationInfo info, StreamingContext context) { Deserialize(info, context); } protected override IComponent[] CreateComponentsCore(IDesignerHost host) { CustomControlDialog dialog = new CustomControlDialog(host); DialogResult dialogResult = dialog.ShowDialog(); if (dialogResult == DialogResult.OK) { IComponent component = (IComponent)dialog.CustomInstance; IContainer container = host.Container; container.Add(component); return new IComponent[] { component }; } else { return new IComponent[] {}; } } }
To be added to the Toolbox, a control must be contained in an instance of ToolboxItem or of an object derived from ToolboxItem and then be added to the Toolbox using the IToolboxService interface.
To encapsulate and add Toolbox controls
Encapsulate the Component implementation in an instance of a ToolboxItem object or a ToolboxItem-derived object by calling that object's Initialize method with the implementing component's System.Type:
Above is an example of an object userControl derived from UserControl (an instance of the ToolboxControl1 object shown above) being used to construct a new ToolboxItem.
Use the Toolbox service (IToolboxService) to add the ToolboxItem object constructed from the underlying control implementation.
In the example below, access to the Toolbox service is obtained, some of the properties of the ToolboxItem instance customItem are set, and then customItem is added to the Toolbox:
Applying attributes to the class implementing a toolbox control allows the Visual Studio environment or a Visual Studio SDK based application to use reflection to automatically detect and properly add controls to the Toolbox.
To apply reflection and attributes to Toolbox controls
Identify all objects used to implement Toolbox controls with instances of ToolboxItemAttribute.
The type of instance of ToolboxItemAttribute to an object will determines if and how a ToolboxItem is constructed from it.
Applying an instance of ToolboxItemAttribute constructed with a BOOLEAN value of false to an object makes that object unavailable to the Toolbox through reflection.
This can be useful to isolate an object, such as a UserControl from the Toolbox during development.
Applying an instance of ToolboxItemAttribute constructed with a BOOLEAN value of true to an object makes that object available to the Toolbox through reflection and requires that the object be added to the Toolbox using a default ToolboxItem object.
Applying an instance of ToolboxItemAttribute constructed with the Type of a custom object derived from ToolboxItem makes the object available to the Toolbox through reflection and requires that the object be added to the Toolbox using this custom object derived from ToolboxItem.
Specify (to the Visual Studio environment's reflection mechanism) the bitmap to use for displaying the Toolbox control in the Toolbox by adding an instance of ToolboxBitmapAttribute to the Toolbox control implementation.
If needed, apply instances of ToolboxItemFilterAttribute to ToolboxItem objects to use reflection to statically mark them for use with objects that have a matching attribute.
In the example below, a Toolbox control's implementation has an instance of ProvideAssemblyFilterAttribute applied to it, which makes that control available in the Toolbox only when the current working document is a UserControl designers
There are three basic techniques for using reflection to autoloading ToolboxItem.
Using the ToolService Functionality to Retrieve Toolbox Controls
The ToolboxService provides VSPackages with the static GetToolboxItems methods that use reflection to scan assemblies for all types that support toolbox items, and return items for those types. To be returned, a toolbox item must:
Be public.
Implement the IComponent class.
Not be abstract.
Have a ToolboxItemAttribute on its type.
Not have a ToolboxItemAttribute set to false on its type
Not contain generic parameters.
To obtain this list
Create an instance of Assembly referring to the assembly that is to be scanned for ToolboxItem objects.
Call GetToolboxItems, returning an ICollection object containing a list of the appropriate objects.
Use GetService to obtain access to IToolboxService, and use its AddToolboxItem method to add items from the returned ICollection object to the Toolbox.
The code below queries the running application and obtains a list of all its ToolboxItem objects and loads them. For an example illustrating this in running code, see the Initialization method in Walkthrough: Dynamic Customization of ToolboxItem Configuration.
protected ICollection ToolboxItemList = null; ToolboxItemList = ToolboxService.GetToolboxItems(Assembly.GetExecutingAssembly(), ""); if (ToolboxItemList == null){ throw new ApplicationException("Unable to generate a toolbox Items listing for " + GetType().FullName); } IToolboxService toolboxService = GetService(typeof(IToolboxService)) as IToolboxService; foreach (ToolboxItem itemFromList in ToolboxItemList){ toolboxService.AddToolboxItem(itemFromList, CategoryTab); }
Using Embedded Text Resources to Autoload Toolbox Controls
A text resource in an assembly containing a properly formatted list of Toolbox controls can be used by ParseToolboxResource to automatically load a Toolbox control if properly formatted.
A text resource containing a list of objects to load must be available in an assembly accessible to the VSPackage.
To add and make available a text resource to the assembly
In Solution Explorer, right-click a the project.
Point to Add, then click New Item.
In the Add New Item dialog box, select Text File and supply a name.
In Solution Explorer, right-click the newly created text file and set the Build Action property to Embedded Resource.
Entries for the Toolbox control to be loaded must contain the name of the implementing class, the name of the assembly containing it.
For information on the format of Toolbox controls entries to the embedded text resource, see the ParseToolboxResource reference page.
Set up a search path for the files containing the assemblies hosting Toolbox control objects.
ParseToolboxResource, searches only directories specified in the registry entry HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\<version>\AssemblyFolders, where <version> is the version number of the release of Visual Studio (for example, 8.0).
For details on the correct format of the AssemblyFolder registry entries, see the ParseToolboxResource reference page.
Obtain an instance of Synchronized accessing the embedded text resource, and, if localization support is needed for category names, an instance of ResourceManager, and use these to invoke the ParseToolboxResource method.
ResourceManager rm = new ResourceManager("TbxCategories", Assembly.GetExecutingAssembly()); Stream toolboxStream = TbxItemProvider.GetType().Assembly.GetManifestResourceStream("ToolboxItems.txt"); if (toolboxStream != null) { using (TextReader reader = new StreamReader(toolboxStream)) { ParseToolboxResource(reader, rm); }
In the example above, a list contained in an embedded text resource in the assembly containing the class TbxItemProvider is passed to ParseToolboxResource along with the TbxCategories string resources.
The method will search all the files containing assemblies in the directories specified under the AssemblyFolders registry entry for the Toolbox controls listed in the resource and load them.
Explicitly Using Reflection to Autoload Toolbox Controls
If it is necessary to explicitly query assemblies for information about the Toolbox controls they contain, rather than delegating the task to GetToolboxItems, this can be done.
To explicitly use reflection to autoload Toolbox controls
Create an instance of Assembly, referring to each assembly that is to be scanned for ToolboxItem objects.
For each assembly to be scanned, use the Assembly object's GetTypes method to obtain a list of each System.Type in the assembly.
Verify that the type is not abstract and supports the IComponent interface (all implementations of Toolbox controls used to instantiate a ToolboxItem object must implement this interface).
Obtain the attributes of Type and use this information to determine if the VSPackage wishes to load the object.
Use GetConstructor to obtain constructors for the ToolboxItem objects that the Toolbox controls require.
Construct the ToolboxItem objects and add them to the Toolbox.
To see an example illustrating explicit use of reflection to obtain and autoload Toolbox controls, see the CreateItemList described in Walkthrough: Autoloading Toolbox Items.
A VSPackage can exercise additional control over when and how a Toolbox control is displayed by the Toolbox, through the implementation of IConfigureToolboxItem, and use of ProvideAssemblyFilterAttribute, and ProvideToolboxItemConfigurationAttribute.
Applying ToolboxItemFilterAttribute instances to a class provides only static control over when and how a Toolbox control is available.
To create dynamic configuration support for Toolbox controls
Construct a class implementing the IConfigureToolboxItem interface as part of a VSPackage.
Associate the implementation of IConfigureToolboxItem with the objects in specific assemblies by applying an instance of the ProvideAssemblyFilterAttribute to it.
The example below supplies a dynamic configuration for Toolbox control object assemblies within the Vsip.* namespace and requiring that certain ToolboxItem objects be visible only with UserControl-based designers and other never visible with UserControl-based designers.
[ProvideAssemblyFilterAttribute("Vsip.*, Version=*, Culture=*, PublicKeyToken=*")] [GuidAttribute("XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX")] public sealed class ToolboxConfig : IConfigureToolboxItem { public ToolboxConfig() { } /// <summary> /// Adds extra configuration information to this toolbox item. /// </summary> public void ConfigureToolboxItem(ToolboxItem item) { if (item == null) return; //hide from .NET Compact Framework on the device designer. ToolboxItemFilterAttribute newFilter = null; if (item.TypeName == typeof(ToolboxControl1).ToString()) { newFilter = new ToolboxItemFilterAttribute("System.Windows.Forms.UserControl", ToolboxItemFilterType.Require); } else if (item.TypeName == typeof(ToolboxControl2).ToString()) { newFilter = new ToolboxItemFilterAttribute("System.Windows.Forms.UserControl", ToolboxItemFilterType.Prevent); } if (newFilter != null) { ArrayList array = new ArrayList(); array.Add(newFilter); item.Filter = (ToolboxItemFilterAttribute[]) array.ToArray(typeof(ToolboxItemFilterAttribute)); } } } }
Register a VSPackage as providing a specific implementation of IConfigureToolboxItem by applying an instance of ProvideToolboxItemConfigurationAttribute to the VSPackage's implementation of Package.
The example below would inform the Visual Studio environment that the package implemented by Vsip.ItemConfiguration.ItemConfiguration provides the class Vsip.ItemConfiguration.ToolboxConfiguration to support dynamic ToolboxItem.
[ProvideToolboxItemsAttribute(3)] [DefaultRegistryRoot("Software\\Microsoft\\VisualStudio\\8.0")] [InstalledProductRegistration(false, "#100", "#102", "1.0", IconResourceID = 400)] [ProvideLoadKey("Standard", "1.0", "Package Name", "Company", 1)] [ProvideMenuResource(1000, 1)] [ProvideToolboxItemConfigurationAttribute(typeof(ToolboxConfig))] [GuidAttribute("YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY")] public class ItemConfiguration : Package
In addition to being added to the Toolbox itself, ToolboxItem objects and their implementations can be used to extend the drag-and-drop support in the Visual Studio IDE. This can allow arbitrary Clipboard formats to be exposed to the Toolbox and in editors.
VSPackages based on the Managed Package Framework must register as providing custom Toolbox item Clipboard formats, by applying an instance of ProvideToolboxFormatAttribute to the class implementing Package.
For more information on registering as a Toolbox provider, see Registering Toolbox Support Features.
To provide custom Clipboard formats and drag-and-drop support with Toolbox controls
Create an implementation of the ToolboxItemCreatorCallback delegate.
This implementation should return a ToolboxItem object that supports the non-standard Clipboard format.
For an example implementation of a ToolboxItemCreatorCallback delegate, see the ToolboxItem and ToolboxItemCreatorCallback reference pages.
Make this implementation of the ToolboxItemCreatorCallback delegate available to the Visual Studio Toolbox for a non-standard toolbox by calling AddCreator.
[GuidAttribute("7D91995B-A799-485e-BFC7-C52545DFB5DD")] [ProvideToolboxFormatAttribute("MyFormat")] public class ItemConfiguration : MSVSIP.Package { public override void Initialize() { /* * */ //"Adding this class as a ToolboxItemCreator"); IToolboxService toolbox = (IToolboxService)host.GetService(typeof(IToolboxService)); if (toolbox != null) { toolboxCreator = new ToolboxItemCreatorCallback(this.OnCreateToolboxItem); toolbox.AddCreator(toolboxCreator, "MyFormat", host); } private ToolboxItem OnCreateToolboxItem(object serializedData, string format) { /* * */ } } }
|
http://msdn.microsoft.com/en-US/library/bb166486(v=vs.90).aspx
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
HTML/next
From W3C Wiki
HTML.next
HTML.next in W3C bugzilla
Ideas for HTML.next. — Sam Ruby
Do not hesitate to make lists. A good source of ideas are the bugs resolved LATER.
See also (and copy from there to here)
- - Proposed HTML elements and attributes
Ideas List
New Semantics
Decompress Element <decompress> to integrate files from ZIP folders into webpages directly
Goal: To allow files from compressed folders (primarily .ZIP, and so forth) to be accessible in web browsers. This could have many uses, such as reducing bandwidth for computer and mobile platforms by distributing large HTML or image files through a ZIP folder and allowing them to be accessible through a web browser.
First, specify the ZIP folder that carries the files that you wish to be displayed on your webpage.
Example: <decompress href=””>
This informs the browser that the zip folder “familyreunion.zip” contains files that may need to be used in the HTML document. After the decompress element has been used, files from the ZIP folder can be integrated into the webpage like the following examples:
<a href=”familyreunion.zip/html/activities.html”>Activities from our family reunion</a> <img src=”familyreunion.zip/img/familyreunion1.jpg”>
In the first line of code from the example, the anchor tag provides a link to an HTML document inside the file “familyreunion.zip”. Since the ZIP folder has already been specified in the decompress tag, the anchor tag automatically is aware of where to go inside the ZIP folder if the user clicks on the link “Activities from our family reunion.”
On the second line of code from the example, the image tag works likewise but grabs a photo from the specified “familyreunion.zip” and sticks it on the current page the user is browsing.
Just as note, it is imperative that each ZIP file that contains files to be integrated into a webpage is called through the decompress tag. Not doing so might have the browser prompt the user to save the ZIP folder to the user’s hard drive instead of displaying the contents directly in the webpage.
--Jace Voracek November 5, 2011
Isn't this already achieved through the
Content-Encoding and
Transfer-Encoding headers of HTTP/1.1?
Semantics for expressing titles and authors
Being able to identify the title of a book, blog posts, movie, etc. and to relate to it the authors even if the markup is being spread over a few paragraph. In a pseudo markup to illustrate what could be the links in between the items. That would not be mandatory to establish the relations in between them.
[title: The praise of Shadow id:praise by:junichiro] is a book written by [author: Junichiro Tanizaki id:junichiro] explaining … etc.
<location> element (like <time>) for expressing geo information, eg with attributes lat, long, altitude
Eg <location lat=54.3 long=32 altitude=540>Bill's mother's house</location
<datagrid> element
<teaser> Element
I've made this proposal elsewhere, but here seems a good place to reiterate it. The <teaser> element is intended to be a wrapper around a summary block of content with an associated link to a fuller block of content. You see similar structures all over the web, from search result listings to front pages for blogging sites, in which you generally have a title (often linked), a summary block with or without some kind of media resource, and either a _more_ link or a URL to the full article. In general it would be a sectional element that might be placed in another sectional element, such as <nav> pages:
<nav> <teaser> <header> <h1><a href="">My First Cool Article</a></h1> </header> <p>This is my first article on the page, and it's really cool.</p> <footer> <time>3 Days Ago</time> <div><a href="" ></a></div> </footer> </teaser> <teaser> <header> <h1><a href="" >My Second Cool Article</a></h1> </header> <p>This article is on superconducting fields, and is even cooler than my first article.</p> <footer> <time>1 Days Ago</time> <div> <a href="" ></a </div> </footer> </teaser> </nav>
There are a number of good reasons for adopting the <teaser> element:
- It describes a common, frequently used structure in HTML.
- There are currently no elements that serve the same purpose (the closest may be <cite>, but that more commonly is used specifically as a footnote citation mechanism, and is not specifically aware of the <header>/<footer> structure of HTML5).
- It increases search engine optimization and component control optimization, as different widgets could be used to render this structure in different ways from a list of items to a shoutbox or similar component.
- It doesn't necessarily participate in the list numbering mechanism.
- It can be used with internal named anchorst to make a fast TOC.
- It works well in the blogging model that HTML5 seems to be adopting as the foundation for sectional content.
Importing font file formats using a "fontsrc" tag attribute
Being able to integrate specific font file formats into a webpage using a "fontsrc" attribute. Such would be used for purposes of allowing a "special" font to be displayed on a page that would typically be unrecognized by other web browsers. This would allow a developer to use unique special characters on the page, or developers to customize the font of languages that are sometimes difficult to read because of jumbled pixels. (Arabic, Farsi, etc.) It may be best to use this element with the common .TTF, .FNT, and .TF formats.
For example, the font format could be called using the font tag:
<font fontsrc="" size="5"> Text written here will be displayed according to the font file format specified. </font>
--Jace Voracek April 2011
This functionality is already supported via the CSS @font-face rule [1].
--Glenn Adams November 2011
Is necessary a CSS atribute to informate a break line, because no all break lines are semantics. Additionally this change will make the BR tag consistent. Example "BR definition":
BR { breakline:right; } breakline: none | left | right ; none : = default left : = insert a breakline before the element right : = insert a breakline after the element
Another example ;
<LABEL><INPUT><LABEL><INPUT>
Today show all in same line. If you desire one input for line, you need use this:
<LABEL><INPUT><BR><LABEL><INPUT>
My sugestion is:
<LABEL><INPUT><LABEL><INPUT> INPUT { breakline:right; }
or:
INPUT { breakline:left; }
--Edson Carli 22:23, 20 September 2012 (UTC)
Forms
Automatic capitalization in input fields
See the proposal for use cases and details.
Enhancing Authentication Forms
Today most browsers include heuristics for guessing when a page includes a form that represents an authentication or login form with variable accuracy. These heuristics are sometimes confused by password change forms, for example. Adding annotations to forms and fields representing authentication would allow user agents to more accurately recognise these scenarios and improve interoperability in this area.
--Adrian Bateman [MSFT] 11 May 2011
Localising Form Controls
Web developers frequently ask for the ability to localise form controls that include text such as the Browse or Pick button on <input type=file> and the strings that make up date/time controls.
--Adrian Bateman [MSFT] 28 June 2011
ComboBox
Web developers need a inputable dropdown select like other UI system.
<select name="age" inputable="inputable"> <option value="11"></option> <option value="12"></option> </select>
--Yanming Zhou 1 Feb 2012
Multimedia
Games
Implement a pseudo-cursor attribute or possibly element, such that once animated it would fire events identically to the cursor. In many situations such as screencast, or game-replay, it is helpful to be able to review behaviour,
Responsive Images
See the documentation of Responsive Images on this wiki. Please add to that, including incorporating other ideas like the below (Adaptive Images etc.).
Adaptive Images
Read more about Adaptive Images or see Adaptive Image Element.
Adaptive Streaming
See and for some initial work.
There are a number of different adaptive streaming formats (just like there are a number of different progressive download media formats). Many uses cases for adaptive streaming also require some form of protected content. The current HTML5 media elements support selecting from different formats and defining these formats is outside the scope of the HTML working group. Nevertheless, there are aspects of adaptive streaming and protected content that do require enhancement in the scope of HTML to support common use cases (especially those from popular video distribution networks). Specifically, these include:
- Additional media element states to allow the UA to display status (for example negotiating acquisition with the server)
- Additional media element errors (for example failed negotiation)
- Additional media element events (for example bit-rate change)
- Additional media element properties (for example current bit-rate - this may be related to the other QOS metrics)
Audio Balance
HTML5 Audio balance adjustment (left/right) for stereo tracks.
Video enhancements
(fast/slow) rewind, previous/next frame
Fullscreen and Screenshots
domElement.fullScreen(); and
domElement.getImageData(0, 0, domElement.offsetWidth, domElement.offsetHeight);
For fullscreen, see Fullscreen
Authoring
<editor> element
An editor element which allows saving back to the same page one is editing without all the hassle we have to go through today.
<textarea type="wysiwyg">
It could be <textarea type="wysiwyg"> or some such. It should support "flat" yet simplified HTML.
Main purpose of the editor: WYSIWYG editing of structured (semantic) text. Intended usage: blogs, emails, WYSIWYG wiki editing.
Proposed list of supported elements:
- blocks p, ul/li,ol/li,dl/dt/dd,blockquote,pre
- spans: strong/em/a/sup/sub/u/code/strike.
- inline-blocks: img and br(with proper visualization).
- simple tables: table/tr/th/td
Features:
- shall support copy/paste of images from/to system clipboard (can be disabled by attribute)
- shall support copy/paste of text and HTML from/to system clipboard (can be disabled by attribute)
- shall not support inline styles of any kinds.
- may have attribute content-style="some.css" defining styles of elements inside the editor.
- only simple "tag" and "tag.class" selectors in some.css shall be used.
Serialization: It shall be serializeable (set/get) into markup with structure close to this:
<envelope> <head> <image cid="1234" name="..." type=mime/type>base-64-encoded body</image> <image cid="5678" name="..." type=mime/type>base-64-encoded body</image> </head> <body>... mini-html markup ... <img alt=... </body> </envelope>
cid's (content id's) shall support unique naming that allows to do simple body.replace("cid=1234","src=final.url") when content needs to be rendered outside the editor.
Form submission of such input element shall be done as a whole: images, if any, are part of the content. Value of the input element is either plain text, HTML fragment (with images having only external urls) and <envelope> - for JavaScript it is a JSON map {"head":[],body:""}. --Andrew Fedoniouk 05:46, 11 April 2011 (UTC)
copy-paste-ability
Imagine a list like this one:
<ol><li>Lorem<li>Ipsum<li>Dolor<li>sit<li>et cetera</ol>
It will render like this:
- Lorem
- Ipsum
- Dolor
- sit
- et cetera
Now, if you as user/author copy the entire list item with 'Dolor', and paste it into a normal WYSIWYG text editor, its "3." will be replaced with "1." But I want to be able to copy it as number "3."
Components and ECMAScript
"behaviors"
Also known as dynamic subclassing of DOM elements.
This feature is greatly useful for UI component frameworks and toolkits.
Consider this JavaScript code:
document.behaviors["ul.some>li"] = { // the behavior class: attached: function() {...}, detached: function() {...}, onmousedown: function() {...}, onclick: function() {...}, ... };
The behavior is a collection of methods that is assigned (mixed-in) to all elements satisfying the selector used in declaration. When the element gets the behavior then the attached function is invoked. onmousedown and others will be default event handlers for such elements.
For the same purposes we can reuse binding property from CSS (See BECSS at )
document.behaviors["some.name"] = { // registration of the behavior class: attached: function() {...}, detached: function() {...}, onmousedown: function() {...}, onclick: function() {...}, ... };
And in CSS:
ul.some>li { color: red; binding: "some.name" [optional js source url]; }
Example of its use, in CSS we can define
input[type="masked"] { binding:"jquery-ui.masked"; }
And so in markup it will be enough to define
<input type="masked" >
to insert and activate it in place (assuming that behavioral jquery-ui implementation is included).
--Andrew Fedoniouk 06:14, 11 April 2011 (UTC)
include("url");
For the sake of modularity it should be window.include("url"[,mime/type]) method available in JavaScript.
Similar to what @import url(...) does in CSS.
If to assume that behaviors are there then the include() should able to include CSS resources too.
--Andrew Fedoniouk 06:25, 11 April 2011 (UTC)
Allow or require script element to be placed outside <body></body>
script elements or any other inclusion tag (like included css files) should be placed outside the body of a document.
<html> <head> scripts, css etc </head> <body> the place for html </body scripts etc </html>
--Marco Kotrotsos 09 Juli 2012
JavaScript: namespaces and classes
(I don't know is this the right place for JavaScript improvement proposals or not, anyway...)
JavaScript code is getting more and more complex these days. Many libraries can be used on the same page. But lack of namespaces [and classes] is just non-acceptable. May I suggest to consider namespaces and classes in JavaScript from my TIScript? TIScript uses similar (to JS) runtime and syntax so I am pretty sure they can be added to JS as they are.
--Andrew Fedoniouk 06:37, 11 April 2011 (UTC)
Syntax highlighting for <code> elements
I believe this has already been hinted on before, but here goes. Browsers usually already have a syntax highlighting code path, used when viewing HTML, CSS or JavaScript source code. It would be neat to enable syntax highlighting on <code> elements, without the need for crazy server-side or client-side parsers. It should be possible to specify the programming language through an attribute (although I’m not sure @lang should be used for this). The presence of this special attribute could imply that syntax highlighting is desired, but it can still be disabled by attribute. Native code highlighting for HTML, CSS and JS would be great start, but maybe there could be a standardized way for authors to include new programming languages. I’d imagine this would use the shadow DOM to render, and styles could be tweaked through the use of pseudo-elements in CSS.
--Mathias Bynens 05:56, 12 April 2011 (UTC)
Code Formatting for <code> elements
Regarding the proposal above me 'Syntax Highlighting' I would like to extend this with Code Formatting. Same rules and standards but to include formatting.
--Marco Kotrotsos 15:14, 09 May 2012 (UTC)
Document Organisation
Modular Enhancements
Thinking about enhancing HTML after the HTML5 specification moves into Last Call, we believe that an approach similar to the CSS3 structure is favourable. The CSS working group works on a number of different modules in parallel, each progressing at a different rate, some being successful and some falling by the wayside. They all depend on the framework defined by CSS 2.1, the large monolithic spec that they all reference. Most of the modules are additive but some may modify or extend the behaviour of the underlying 2.1 spec.
Applying this notion to HTML, we would like to see new HTML features proposed as cohesive modules with a dependency back to the monolithic HTML5 spec that provides the overall framework. One of the problems with a large specification is that different features stabilise at different rates. Adopting a more modular approach would allow the features with most interest and support to move forward more quickly. This allows implementers to ship without vendor prefixes and yet avoid some of the versioning issues we've seen recently.
--Adrian Bateman [MSFT] 11 May 2011
|
http://www.w3.org/wiki/index.php?title=HTML/next&oldid=63383
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
g” technique or a ”traced placeholder”
SVG to show a preview of the image while it loads,
plus having AVIF support (with the help of gbimage-bridge)!
plus being usable as a container (no more hacks with extra wrappers)
plus being able to work with multiple stacked background images
plus being able to style with Tailwind CSS and suchlike Frameworks
All the glamour (and speed) of
gatsby-(plugin)-image for your Background Images!
Of course styleable with
styled-components and the like!
For usage with Gatsby 3+4’s
gatsby-plugin-image see:
Gatsby 3+4 & gatsby-plugin-image!
ES5 Version
gatsby-background-image has a companion package completely transpiled to
ES5:
gatsby-background-image-es5.
Have a look at its README, it nearly works the same - though with (nearly) all polyfills included to support legacy browsers it’s nearly three times the size of this package.
Table of Contents
- Example Repo
- Procedure
- Install
- Gatsby 3+4 & gatsby-plugin-image
- How to Use
- How to Use with Multiple Images
- How to Use with Art-Direction support
- Configuration & props
- Styling & Passed Through Styles
- Additional props
- Changed props
- props Not Available
- Handling of Remaining props
- Testing
gatsby-background-image
- Contributing
- TODO
- Acknowledgements
Example Repo
gatsby-background-image has an example repository to see its similarities &
differences to
gatsby-image side by side.
It’s located at: gbitest
Procedure
As
gatsby-image is designed to work seamlessly with Gatsby’s native image
processing capabilities powered by GraphQL and Sharp, so is
gatsby-background-image.
To produce optimized background-images, you need only to:
- Import
gatsby-background-imageand use it in place of the built-in
divor suchlike containers.
- Write a GraphQL query using one of the GraphQL “fragments” provided by
gatsby-transformer-sharp
which specify the fields needed by
gatsby-background-image.
The GraphQL query creates multiple thumbnails with optimized JPEG and PNG
compression (or even WebP files for browsers that support them).
The
gatsby-background-image component automatically sets up the
“blur-up” effect as well as lazy loading of images further down the screen.
Install
To add
gatsby-background-image as a dependency to your Gatsby-project use
npm install --save gatsby-background-image
or
yarn add gatsby-background-image
Depending on the gatsby starter you used, you may need to include gatsby-transformer-sharp and gatsby-plugin-sharp as well, and make sure they are installed and included in your gatsby-config.
npm install --save gatsby-transformer-sharp gatsby-plugin-sharp
or
yarn add`, ], }
Tailwind CSS and suchlike Frameworks
With
gatsby-background-image(-es5) @
v0.8.8 it’s now possible to use
Tailwind CSS classes like
md:w-1/2 to style
BackgroundImage.
Therefore a
specialChars plugin option has been introduced to be able to
properly escape such classes, which defaults to
:/ but may be set to other
characters in
gatsby-config.js like the following:
module.exports = { plugins: [ ... { resolve: 'gatsby-background-image-es5', options: { // add your own characters to escape, replacing the default ':/' specialChars: '/:', }, }, ... ], };
Important:
If you support Safari (older versions) and/or Internet Explorer, you have to
install the
IntersectionObserver polyfill.
As - at the time of writing - neither fully implements the feature (see caniuse.com).
A solution to this issue was mentioned in a comment over at gatsby-image/issues
and you are able to apply it the following way:
1. Install the
intersection-observer
polyfill package by running:
npm i --save intersection-observer
or
yarn add intersection-observer
2. Dynamically load the polyfill in your
gatsby-browser.js:
// ES5 way // exports.onClientEntry = () => { // ES6 way export const onClientEntry = () => { // IntersectionObserver polyfill for gatsby-background-image (Safari, IE) if (!(`IntersectionObserver` in window)) { import(`intersection-observer`) console.log(`# IntersectionObserver is polyfilled!`) } }
Gatsby 3+4 & gatsby-plugin-image
For the moment, until the next major version for
gatsby-background-image, the
new syntax of image queries is only supported through a companion package called
gbimage-bridge. Head over to its
README
to learn more, but here a TLDR installation instruction:
yarn add gbimage-bridge
or
npm install --save gbimage-bridge
and usage with
BackgroundImage is as follows:
import React from 'react' import { graphql, useStaticQuery } from 'gatsby' import { getImage, GatsbyImage } from "gatsby-plugin-image" import { convertToBgImage } from "gbimage-bridge" import BackgroundImage from 'gatsby-background-image' const GbiBridged = () => { const { placeholderImage } = useStaticQuery( graphql` query { placeholderImage: file(relativePath: { eq: "gatsby-astronaut.png" }) { childImageSharp { gatsbyImageData( width: 200 placeholder: BLURRED formats: [AUTO, WEBP, AVIF] ) } } } ` ) const image = getImage(placeholderImage) // Use like this: const bgImage = convertToBgImage(image) return ( <BackgroundImage Tag="section" // Spread bgImage into BackgroundImage: {...bgImage} preserveStackingContext > <div style={{minHeight: 1000, minWidth: 1000}}> <GatsbyImage image={image} alt={"testimage"}/> </div> </BackgroundImage> ) } export default GbiBridged
But
gbimage-bridge has also a
BgImage wrapper component for this, so read
more over there ; )!
How to Use
Be sure to play around with the Example Repo, as it shows
a few more flavors of using
BackgroundImage, e.g. encapsulating it in a
component : )!
This is what a component using
gatsby-background-image might look like:
import React from 'react' import { graphql, useStaticQuery } from 'gatsby' import styled from 'styled-components' import BackgroundImage from 'gatsby-background-image' const BackgroundSection = ({ className }) => { const data = useStaticQuery( graphql` query { desktop: file(relativePath: { eq: "seamless-bg-desktop.jpg" }) { childImageSharp { fluid(quality: 90, maxWidth: 1920) { ...GatsbyImageSharpFluid_withWebp } } } } ` ) //
And here is the same component with the data retrieved the old way with the StaticQuery component:
import React from 'react' import { graphql, StaticQuery } from 'gatsby' import styled from 'styled-components' import BackgroundImage from 'gatsby-background-image' const BackgroundSection = ({ className }) => ( <StaticQuery query={graphql` query { desktop: file(relativePath: { eq: "seamless-bg-desktop.jpg" }) { childImageSharp { fluid(quality: 90, maxWidth: 1920) { ...GatsbyImageSharpFluid_withWebp } } } } `} render={data => { //
How to Use with Multiple Images
As
gatsby-background-image may be used with multiple backgrounds,
including CSS strings like
rgba() or suchlike this is what a component
using it might look like:
import { graphql, useStaticQuery } from 'gatsby' import React from 'react' import styled from 'styled-components' import BackgroundImage from 'gatsby-background-image' const MultiBackground = ({ className }) => { const { astronaut, seamlessBackground, } = useStaticQuery( graphql` query { astronaut: file(relativePath: { eq: "astronaut.png" }) { childImageSharp { fluid(quality: 100) { ...GatsbyImageSharpFluid_withWebp } } } seamlessBackground: file( relativePath: { eq: "seamless-background.jpg" } ) { childImageSharp { fluid(quality: 100, maxWidth: 420) { ...GatsbyImageSharpFluid_withWebp } } } } ` ) // Watch out for CSS's stacking order, especially when styling the individual // positions! The lowermost image comes last! const backgroundFluidImageStack = [ seamlessBackground.childImageSharp.fluid, `linear-gradient(rgba(220, 15, 15, 0.73), rgba(4, 243, 67, 0.73))` astronaut.childImageSharp.fluid, ].reverse() return ( <BackgroundImage Tag={`section`} id={`test`} className={className} fluid={backgroundFluidImageStack} > <StyledInnerWrapper> <h2> This is a test of multiple background images. </h2> </StyledInnerWrapper> </BackgroundImage> ) } const StyledInnerWrapper = styled.div` margin-top: 10%; display: flex; flex-direction: column; align-items: center; ` const StyledMultiBackground = styled(MultiBackground)` width: 100%; min-height: 100vh; /* You should set a background-size as the default value is "cover"! */ background-size: auto; /* So we won't have the default "lightgray" background-color. */ background-color: transparent; /* Now again, remember the stacking order of CSS: lowermost comes last! */ background-repeat: no-repeat, no-repeat, repeat; background-position: center 155%, center, center; color: #fff; ` export default StyledMultiBackground
How to Use with Art-Direction support
gatsby-background.
Attention: Currently you have to choose between Art-directed and Multiple-Images!
import { graphql, useStaticQuery } from 'gatsby' import React from 'react' import styled from 'styled-components' import BackgroundImage from 'gatsby-background-image' const ArtDirectedBackground = ({ className }) => { const { mobileImage, desktopImage } = useStaticQuery( graphql` query { mobileImage: file(relativePath: { eq: "490x352.jpg" }) { childImageSharp { fluid(maxWidth: 490, quality: 100) { ...GatsbyImageSharpFluid_withWebp } } } desktopImage: file(relativePath: { eq: "tree.jpg" }) { childImageSharp { fluid(quality: 100, maxWidth: 4160) { ...GatsbyImageSharpFluid_withWebp } } } } ` ) // Set up the array of image data and `media` keys. // You can have as many entries as you'd like. const sources = [ mobileImage.childImageSharp.fluid, { ...desktopImage.childImageSharp.fluid, media: `(min-width: 491px)`, }, ] return ( <BackgroundImage Tag={`section`} id={`media-test`} className={className} fluid={sources} > <StyledInnerWrapper> <h2>Hello art-directed gatsby-background-image.</h2> </StyledInnerWrapper> </BackgroundImage> ) } const StyledInnerWrapper = styled.div` margin-top: 10%; display: flex; flex-direction: column; align-items: center; ` const StyledArtDirectedBackground = styled(ArtDirectedBackground)` width: 100%; min-height: 100vh; /* You should set a background-size as the default value is "cover"! */ background-size: auto; /* So we won't have the default "lightgray" background-color. */ background-color: transparent; ` export default StyledArtDirectedBackground
While you could achieve a similar effect with plain CSS media queries,
gatsby-background-image accomplishes this using an internal
HTMLPictureElement,
as well as
window.matchMedia(), which ensures that browsers only download
the image they need for a given breakpoint while preventing
gatsby-image issue #15189.
Configuration & props
gatsby-background-image nearly works the same as
gatsby-image so have a look
at their options & props
to get started.
But be sure to also throw a glance at Additional props, Changed props, props Not Available and Handling of Remaining props as well ; )!
Styling & Passed Through Styles
You may style your
gatsby-background-image BackgroundImage-component every way
you like, be it global CSS, CSS-Modules or even with
styled-components or your
CSS-in-JS “framework” of choice. The
style={{}} prop is supported as well.
Whichever way you choose, every
background-* style declared in the main
class (or the
style={{}} prop) will directly get passed through to the
pseudo-elements as well (so you would have no need for specifically styling them)!
The specificity hereby is in ascending order:
- class-styles
- extracted
background-*styles
style={{}}prop
The three
background- styles seen above are necessary and will default to:
To be able to overwrite them for each pseudo-element individually, you may reset
their values in the
style={{}} prop with an empty string like such:
style={{ // Defaults are overwrite-able by setting one or each of the following: backgroundSize: '', backgroundPosition: '', backgroundRepeat: '', }}
¡But be sure to target the
:before and
:after pseudo-elements in your CSS,
lest your “blurred-up”, traced placeholder SVG or lazy loaded background images
might jump around!
Passed Props for styled-components and suchlike
Perhaps you want to change the style of a
BackgroundImage by passing a prop to
styled-components or suchlike CSS-in-JS libraries like e.g. the following:
// isDarken gets changed in the parent component. const StyledBackground = styled(BackgroundImage)` &::before, &::after { filter: invert( ${({ isDarken }) => { return isDarken ? '80%' : '0%' }} ); } `
But be aware that there happens no
state change inside the
BackgroundImage,
so React won’t rerender it. This can easily be achieved, by settings an
additional
key prop, which changes as well as the prop like thus:
return <StyledBackgound isDarken={isDarken} key={isDarken ? `dark` : `light`} />
Overflow setting
As of
gatsby-background-image(-es5) @
v0.8.3 the superfluous default of
overflow: hidden was removed, as it was only a remnant from the initial
creation of
gbi (see Acknowledgements for more on its
meagre beginnings ; ). As later seen through issue #59,
this might break some CSS styling like
border-radius, so be sure to include it
yourself, should you need such styles. Sorry for the inconvenience % )!
Noscript Styling
As using multiple background images broke with JavaScript disabled, with
v0.8.0
we switched to an added
<style /> element.
Sadly, in build mode or of course with JS disabled there’s no
document with
which to parse the background-styles from given
classNames and pass them down
to the
:before and
:after pseudo-elements.
So, for the moment, to get your
<BackgroundImage /> to look the same with or
without JS, you have to either set their styles with the
style={{}} prop or
explicitly target the
:before and
:after pseudo-elements in your CSS.
Responsive Styling
Using responsive styles on background images is supported in most cases, except when
passthrough is required. This is typically encountered when trying to make
background-* rules apply to the background image as in
issue #71.
In this case, the background styling will not behave responsively. This is difficult
to fix because it is impossible to determine the
@media rules that apply to an element.
However, a suitable workaround is available. For example, if your style looks like this:
#mybg { background-attachment: fixed; } @media screen and (max-width: 600px) { #mybg { background-attachment: scroll; } }
The
::before and
::after pseudo elements can be targeted directly to make your
style look like this:
#mybg, #mybg::before, #mybg::after { background-attachment: fixed; } @media screen and (max-width: 600px) { #mybg, #mybg::before, #mybg::after { background-attachment: scroll; } }
For more information, refer to issue #71.
Multiple Instances of Same Component
Should you decide to use a single instance of a styled
<BackgroundImage /> for
multiple different images, it will automatically add an additional
className,
a hashed 32bit integer of the current
srcSet or
className prefixed with
gbi-,
to prevent erroneous styling of individual instances.
You wouldn’t have added the same class for different CSS
background-image
styles on your own, or would you have ; )?
Be warned: Styling the components
:before &
:after pseudo-elements
within the main classes then only is going to work again for all instances if
you use
!important on its CSS-properties (cause of CSS-specifity).
Additional props
Starting with
v0.7.5 an extra option is available preserving the
CSS stacking context
by deactivating the “opacity hack (
opacity: 0.99;)” used on
<BackgroundImage />
to allow its usage within stacking context changing element styled with e.g.
grid or
flex per default.
Activating
preserveStackingContext prevents this behavior - but allows you to
use any stacking context changing elements (like elements styled with
position: fixed;) yourself as
children.
Starting with
v0.8.19 it’s possible to change the IntersectionObservers’
rootMargin with a prop of the same name.
v1.4.0 added a
keepStatic switch preventing the container from collapsing &
thus keeping all children (will probably be default in next major version).
Changed props
The
fluid or
fixed props may be given as an array of images returned from
fluid or
fixed queries or CSS Strings like
rgba() or such.
The
fadeIn prop may be set to
soft to ignore cached images and always
try to fade in if
critical isn’t set.
props Not Available
As
gatsby-background-image doesn’t use placeholder-images, the following
props from
gatsby-image are not available, of course.
In the absence of the
placeholderStyle prop, additional styling while the image is loading can be accomplished using the
onLoad or
onStartLoad props. Use either method’s callback to toggle a className on the component with your loading styles.
An example of “softening” the blur up using vanilla CSS.
/* MyBackgroundImage.css */ .loading, .loading::before, .loading::after { filter: blur(15px); } /* ...other styles */
// MyBackroundImage.js import React, { useRef } from "react" import BackgroundImage from "gatsby-background-image" import "./MyBackgroundImage.css" const MyBackgroundImage = ({ children, ...props }) => { const bgRef = useRef() return ( <BackgroundImage ref={bgRef} onStartLoad={() => bgRef.current.selfRef.classList.toggle("loading")} onLoad={() => bgRef.current.selfRef.classList.toggle("loading")} {...props} > {children} </BackgroundImage> ) } export default MyBackgroundImage
For the same implementation with styled components, refer to #110.
From
gbi v1.0.0 on the even older
resolutions &
sizes props are removed
as well - but don’t confuse the latter with the possible
sizes image prop in a
fluid image, which of course is still handled.
Handling of Remaining props
After every available prop is handled, the remaining ones get cleaned up and
spread into the
<BackgroundImage />’s container element.
This way you can “safely” add every ARIA or
data-* attribute you might need
without having to use
gatsby-image’s
itemProp ; ).
Testing
gatsby-background-image
As
gbi uses
short-uuid to create its unique classes, you only have to mock
short-uuid’s
generate() function like explained below.
Either in your
jest.setup.js or the top of your individual test file(s) mock
the complete package:
jest.mock('short-uuid')
Then for each
gbi component you want to test, add a
beforeEach():
beforeEach(() => { // Freeze the generated className. const uuid = require('short-uuid') uuid.generate.mockImplementation(() => '73WakrfVbNJBaAmhQtEeDv') })
Now the class name will always be the same and your snapshot tests should work : ).
Contributing
Everyone is more than welcome to contribute to this little package!
Docs, Reviews, Testing, Code - whatever you want to add, just go for it : ). So have a look at our CONTRIBUTING file and give it a go. Thanks in advance!
TODO
For anything you may think necessary tell me by opening an issue or a PR : )!
Acknowledgements
This package started by pilfering
gatsby-images excellent work and adapting
it - but it’s definitely outgrowing those wee beginnings.
Thanks go to its creators & the @gatsbyjs Team, anyways : )!
|
https://v4.gatsbyjs.com/plugins/gatsby-background-image
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
React-spring
My favorite solution for UI animations when working with React is react-spring, a spring-physics based animation library.
I love it for its simple, declarative, hook-based API and animation updates without re-renders.
In case you're not familiar, the code might look something like:
import { animated, useSpring } from 'react-spring' function Component({ visible }) { const { opacity } = useSpring({ from: { opacity: 0 }, to: { opacity: visible ? 1 : 0 }, config: { mass: 2, tension: 280, friction: 12, clamp: true } }) return <animated.div style={{ opacity }} /> }
BTW, if you are not familiar, check it out!
However, as a newbie to spring-based animations, I've had a hard time knowing which effect the different config settings would have.
I believe I know what
mass is, and I can sort of imagine what
tension is in the context of a spring. But how would these values impact my animation? I found myself often changing the parameters and replaying the animation in the hope it would look good.
In order to take the guess-work out and get the most out of react-spring, I built a visualizer to help me find the optimal config for a specific animation.
React-spring visualizer
On the left side you can change the config values for spring animations, on the right side you can see the animation itself.
In the default "spring" view, the impact of
mass,
tension,
friction and
clamp on a spring are visualized:
Masschanges the size of the "bob" on the end of the spring.
Tensionchanges the amount the spring is pulled from its resting point.
Frictionchanges the scale of the downward arrow in the top left.
- Selecting
clampadds a barrier just above the spring's resting point.
There are 4 other displays to see how your config will look:
- translate
- scale
- rotate
- opacity
You can access them with the buttons below the visualizer.
If you are happy with your configuration, use the copy-to-clipboard button to copy the settings.
I would really appreciate it if you could have a look & let me know what you think!
Discussion (1)
You made React Spring Visualizer?! How cool!! I love React Spring!
|
https://dev.to/joostkiens/get-the-most-out-of-your-react-spring-configuration-6o3
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Gatsby is a powerful platform for building marketing sites, blogs, e-commerce frontends, and more. You can source data from static files and any number of content management systems. You can process images, add support for our favorite styling technique, transform markdown, and just about anything else you can imagine.
At its core, a Gatsby site is a combination of functionality centered around a single config file,
gatsby-config.js. This config file controls an assortment of site metadata, data type mapping, and most importantly, plugins. Plugins contain large amounts of customizable functionality for turning markdown into pages, processing components into documentation, and even processing images.
Scaling Gatsby
Creating a single Gatsby site works super well. The power of
gatsby-config.js, plugins, and more coalesce to make the experience a breeze. However, what if you want to re-use this configuration on our next site? Sure, you could clone a boilerplate each time, but that gets old, quickly. Wouldn't it be great if you could re-use our gatsby-config.js across projects? That's where starters come in.
Improving Reusability with Starters
One way to create more sites with similar functionality faster is to use starters. Starters are basically whole Gatsby sites that can be scaffolded through the gatsby CLI. This helps you start your project by cloning the boilerplate, installing dependencies, and clearing Git history. The community around Gatsby has built a lot of different starters for various use cases including blogging, working with material design, and documentation.
The problem with starters is that they're one-offs. Starters are boilerplate projects that begin to diverge immediately from upstream and have no easy way of updating when changes are made upstream. There's another approach to boilerplate that has become popular in recent years that fixes some problems with the boilerplate approach such as updating with upstream. One such project is
create-react-app. In the Gatsby world, you can improve on starters similarly with themes.
Truly Reusable Themes in Gatsby
If a single
gatsby-config.js encodes the functionality of a whole Gatsby site, then if you can compose the
gatsby-config.js data structure together you have the base for themes. You can encode portions of our gatsby-config as themes and re-use them across sites. This is a big deal because you can have a theme config (or multiple configs) that composes together with the custom config (for the current site). Upgrading the underlying theme does not undo the customizations, meaning you get upstream improvements to the theme without a difficult manual upgrade process.
Why Themes?
Defining themes as the base composition unit of Gatsby sites allows us to start solving a variety of use cases. For example, when a site gets built as part of a wider product offering it's often the case that one team will build out a suite of functionality, including branding elements, and the other teams will mostly consume this functionality. Themes allow us to distribute this functionality as an npm package and allow the customization of various branding elements through our
gatsby-config.js.
Mechanics of Theming
At a base level, theming combines the
gatsby-config.js of the theme with the
gatsby-config.js of your site. Since it's an experimental feature, you use an experimental namespace to declare themes in the config.
Themes often need to be parameterized for various reasons, such as changing the base url for subsections of a site or applying branding variables. You can do this through the theme options if you define our theme's gatsby-config as a function that returns an object.
Themes also function as plugins and any config passed into the theme in your
gatsby-config.js will also be passed to your theme's
gatsby-*.js files as plugin options. This allows themes to override any settings inherited from the theme's own plugin declarations or apply gatsby lifecycle hooks such as
onCreatePage.
Check out the theme examples in this multi-package repo for more examples of using and building themes:.
Next Steps
Sub Themes and Overriding
This is just the first step and it enables us to experiment with further improvements in userland before merging them into core. Sub-theming, for example, is a critical part of a theming ecosystem that is currently missing from Gatsby. Overriding theme elements is possible on a coarse level right now in userland. If, for example, a theme defines a set of pages using
createPage you can define a helper function that will look for the page component first in the user's site and then fall back to the theme's default implementation.
Then in our theme's
createPage call, you simply use the helper to let the user optionally override the default component.
This doesn't allow us to make more granular overrides of different components, but it does allow us to replace the rendering of pages and other whole elements. Component Shadowing, a more granular method of overriding, is already in the works.
If you want to be involved in the development of theming for Gatsby, join the Spectrum community for Gatsby themes.
I'll also be talking about theming Gatsby at Gatsby Days on Dec 7th covering how Gatsby got here and where theming is going next.
|
https://v4.gatsbyjs.com/blog/2018-11-11-introducing-gatsby-themes
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Low Bandwidth SOAP
August 19, 2003
Introduction172,.
Introducing KSOAP
A key ingredient for any web services application is SOAP. The problem with developing a wireless SOAP/XML application -- and the reason for the above-mentioned JSR172 --.
KSOAP begins with a class called
SoapObject. This is a highly generic class
which allows a wireless application to build SOAP calls. A quick look at the
documentation reveals that the methods
getProperty() and
setProperty() are used to accomplish this functionality.
You might immediately notice the conflict with traditional Java programming inherent
in
this design model; it detracts from the use of static typing. Ideally, application
development goes more smoothly if the data structures are defined statically and are
able to
be marshaled/unmarshaled across the network, just as if they were running within the
same
runtime environment. In other words, imagine having to rely solely on the
Hashtable for all your data structures, assigning each property a key for
access and retrieval and then having to remember each property's unique key and given
type.
By having your data model defined statically, a developer can forget all that and rely on the compiler, not the runtime, to catch the type-mismatch errors. Further, casting is kept to a minimum, and it is easier to take advantage of tool support that depends on static typing to help the programmer.
Fortunately, the developers behind KSOAP recognized the importance of statically defined
data structures; they've provided the necessary interface. By implementing
KvmSerializable, a developer can continue to use his or her own data objects.
Using this interface becomes even more important within large-scale enterprise environments
where the data models might be numerous and previously defined.
An Example Application
In an effort to explain how to take advantage of KSOAP, I have put together a sample application. For the purpose of clarity I have made this application simple and straightforward. It is a wireless-app that enables a fictitious manager to receive, by way of a mobile phone, system alerts that are periodically generated by some anonymous back-end system during the normal course of the day. Upon receiving such an alert he or she might decide to call an administrator, or notify their department, or perhaps even dial-in from home and fix the problem.
This sample application involves just four classes.
SystemAlert.java
The data model
AlertService.java
The Service module generating SystemAlerts.
AlertServlet.java
The HTTP interface that sends and receives SOAP Messages.
AlertClient.java
The MIDLet application residing on the Mobile Phone
The components work together as illustrated below.
There are three basic tasks involved in working with KSOAP.
- Implement the serialization logic of your data objects via the
KvmSerializableinterface.
- Register your data objects with the KSOAP
ClassMap. (This needs to done on both the client and server side.)
- Integrate your services with the KSoapServlet. (Optionally, you can create your own servlet to suit your given needs -- all of the KSOAP source code is freely available.)
Let's begin by taking a look at the
SystemAlert class before it has been
modified to work with KSOAP.
SystemAlert.java (before adding KSOAP)
As you can see the class is intended to be serialized; however, since the CLDC specification has done away with
DataObjectStreams and the
Introspection classes, the traditional
mechanism for passing objects over the network can't be utilized, and a new process
of
(de)serialization must be used.
KvmSerializable provides the mechanisms to
accomplish this task. Examine the same class definition after the changes have been
made.
SystemAlert.java (after adding KSOAP)
As you can see, the methods
getProperty() and
setProperty()
handle all the logic necessary for serialization and deserialization. These methods
are
called behind-the-scenes by the SOAP engine rather than by the application itself,
and they
provide all the logic necessary to map the binary data to its string representation.
The
simple types such as
String,
long, and
int are taken
care of for you by the KSOAP framework. Complex types are handled by implementing
the
mapping logic in a similar fashion.
While implementing mapping logic for each and every object can quickly become a tedious exercise, I have created a helpful Ant task to speed the process along. Using this Ant task, a developer need not bother with any of the mapping logic at all; it is created and added automatically. This will save time for the large-scale projects with scores of data objects needing to be serialized.
Servlet
The next piece to examine is the servlet that will function as the interface between
your
service and outside world (i.e. the mobile phone or some other microdevice). This
can be
best accomplished by extending
KSoapServlet, which is available in the KSOAP
package, and overriding its
init() method as shown below.
public void init(ServletConfig cfg) throws ServletException { super.init(cfg); ClassMap classMap = getClassMap (); classMap.addMapping("localNameSpace", "SystemAlert", new SystemAlert().getClass()); AlertService alertService = new AlertService(); instanceMap = new HashMap(); instanceMap.put("AlertService", alertService); }
By registering the
SystemAlert class with the
ClassMap, the SOAP
engine knows to expect a class of that given type, what it should be named and what
namespace to assign it (though the namespace assignment is curiously non-standard.)
Have a
quick look at the associated SOAP message below.
<SOAP-ENV:Envelope xmlns: <SOAP-ENV:Body SOAP-ENV: <n0:getNextSystemAlertResponse <return xsi: <timeStamp xsi:1057726422543</timeStamp> <message xsi:Sample Error Message</message> </return> </n0:getNextSystemAlertResponse> </SOAP-ENV:Body> </SOAP-ENV:Envelope>
Further down the
init() method, a
HashMap is created to store the
given instance of our
AlertService. When the client makes a request it will
send along with it an "AlertService" parameter so that the servlet executes the desired
service. The
getInstance() methods show how this is key is utilized and how the
service object is retrieved. As a conscientious developer you may wish to enforce
stronger
security and access control. This is essentially the location to achieve that; all
your
database and encryption logic can branch off from here.
protected Object getInstance (HttpServletRequest req) { Object result = instanceMap.get(req.getParameter("service")); return result; }
Client
This brings us to the final piece, the client. In order to execute on a mobile phone,
a
developer might choose to use the MIDlet
framework, as I have. Interacting with the
AlertServlet amounts to just a few
lines of code. Below is a snippet of code from AlerterClient.java (available from
the example code.)
For the purposes of this example assume the service resides on a host running at.
public SystemAlert makeRequestToServer() throws Exception { ClassMap classMap = new ClassMap(); classMap.addMapping("localNameSpace", "SystemAlert", new SystemAlert().getClass() ); SoapObject rpc = new SoapObject("urn:xmethods-AlertService", "getNextSystemAlert"); HttpTransport tx = new HttpTransport('HTTP:// somehost.com:8181/services' + "?service=AlertService", "urn:xmethods-AlertService"); tx.setClassMap( classMap ); SystemAlert alert = (SystemAlert) tx.call( rpc ); return alert; }
Once again, we register the
SystemAlert class with the
ClassMap.
Secondly, we create a
SoapObject for the sole purpose of making the request --
your application might wish to send some custom
KvmSerializable object instead.
We then create an
HttpTransport object passing it the destination address for
our
Service; notice that the "service" parameter is also appended.
Lastly, we initiate the transaction via method
tx.call(). If all is well a
SystemAlert object is returned; and if something should go wrong, the SOAP
message will contain a
SOAPFault element instead of its normal payload.
Subsequently, an exception will be thrown from within the
call() method.
As you can see, developing an application like this is not so difficult. By leveraging KSOAP for your wireless application, you can help make it a more powerful and reliable one. Since much of the infrastructure is provided, you as the developer can spend more time focusing on the important aspects of development such as the business logic.
|
https://www.xml.com/pub/a/ws/2003/08/19/ksoap.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
The React ecosystem has a very rich and vast community with many open-source libraries available to help us solve a wide range of problems — from the most basic, common problems, such as state management and forms, to the most complex challenges, such as visual representation of data. For the latter, it can be hard to find the right library for the job.
React libraries are often created and rendered obsolete within a matter of months, and a decision to use a particular library impacts the whole development team. That’s why it’s important to choose the right library for any task or feature you plan to build into your app. Data visualization is no exception.
In this tutorial, we’ll show you how to use Nivo, a data visualization library for React, by building a few charts and implementing them in a React app. We’ll highlight a few key components and show how they work together to make data more visually appealing to your users.
Why use a React chart library?
The most important benefit of using a library for data visualization in React is the ability to implement a wide variety of charts and graphs without reinventing the wheel. You shouldn’t need to spend hours of your precious time trying to implement a simple bar char. A powerful chart library such as Nivo can help you save time, achieve better results, and create a unique user experience for your React app.
Representing data in an aesthetically pleasing way can give your application a fresher, more modern look. Nowadays, most companies use some kind of data visualization feature to deliver an insightful and enjoyable user experience.
Building your own data visualization or chart library is difficult and time-consuming. Many developers who have set out to do so have found that the juice wasn’t worth the squeeze.
What is Nivo?
Nivo is a rich set of data visualization components for React applications. It includes a variety of components that can be used to show graphs and data numbers in modern React apps.
Nivo is built on top of D3.js and comes with powerful extra features such as server-side rendering and declarative charts. It’s a highly customizable data visualization library that provides well-written documentation with many examples and responsive data visualization components. It also supports motion and transitions out-of-the-box.
Why use Nivo instead of D3?
One of the most popular data visualization tools for JavaScript applications is the D3.js library. D3 is a powerful chart library that enables you to bring data to life using HTML, SVG, and CSS.
The only problem with D3.js is that it has a steep learning curve and your code is bound to become quite complex. D3.js makes heavy use of SVG, HTML, and CSS. To use the library correctly, you must have a good understanding of how SVGs and the DOM work.
Don’t get me wrong — D3.js is a very powerful and useful library for building data visualization in modern applications. But most of the time, you don’t want to spend hours trying to create a simple bar chart. React is all about reusability, and Nivo enables you to create reusable components and, in doing so, eliminate hours of debugging.
Nivo is a better choice for data visualization in React because it removes the complexity of building components. with Nivo, you can work more efficiently, customize your components, and create a wide variety of data visualizations with ease.
Installing Nivo
The first step to using Nivo in your React app is to install it in your project:
yarn add @nivo/core
When we install the core package, it doesn’t come with all the components of the library. This might sound like a disadvantage, but it’s actually a good thing.
We don’t want to add a heave package that would increase our bundle size just to use a single component. Instead, we can add the specific package that we need to use a specific component.
Let’s add our first Nivo component package to our React application.
Building a bar chart
To start, we’ll add the bar chart component to use it in our React application:
yarn add @nivo/bar
The bar chart component has many features. It can show data stacked or side by side. It supports both vertical and horizontal layouts and can be customized to render any valid SVG element.
We’re going to import the
bar component into our file so we can start to create our first bar chart using Nivo.
import { ResponsiveBar } from '@nivo/bar'
To get started with the bar component, we need a single prop:
data. The
data prop is an array of objects that we pass to the
ResponsiveBar component. Each object should have a lest one key property to index the data and a key property to determine each series.
We’re going to use the following object:
const data = [ { day: "Monday", degress: 59 }, { day: "Tuesday", degress: 61 }, { day: "Wednesday", degress: 55 }, { day: "Thursday", degress: 78 }, { day: "Friday", degress: 71 }, { day: "Saturday", degress: 56 }, { day: "Sunday", degress: 67 } ];
We pass this
data array to our
ResponsiveBar component. The
ResponsiveBar component needs an
indexBy string to index the data and a
keys property, which is an array of string to use to determine each series.
We’re going to pass our
degrees property as keys and we want to index our data by
days. Our component will end up like this after all that:
const Bar = () => { return ( <ResponsiveBar data={data} keys={["degress"]} indexBy="day" margin={{ top: 50, right: 130, bottom: 50, left: 60 }} padding={0.4} valueScale={{ type: "linear" }} colors="#3182CE" animate={true} enableLabel={false} axisTop={null} axisRight={null} axisLeft={{ tickSize: 5, tickPadding: 5, tickRotation: 0, legend: "degrees", legendPosition: "middle", legendOffset: -40 }} /> ); };
Now we have a beautiful and powerful data visualization component using Nivo! As you can see, with just a few lines of code, we can achieve a powerful result like this:
Building a pie chart
A pie chart displays numerical data as slices of a single circle. This type of data visualization is applicable in virtually all industries and for a wide variety of use cases.
Nivo has a pie chart component, which you can install with the following command:
yarn add @nivo/pie
Similar to the
bar component, the
pie component requires a few props to work: the
data array of objects and the
width and
height for showing your pie chart.
The
data object that we pass to the pie component can be a little bit different. We can use many properties, such as
id,
label,
value, and
color, to customize our pie chart.
We have an array of objects, and each object has a specific property that is going to be used in our pie chart:
- The
idproperty is a unique value for each object of our array
- The
valueproperty is the value of our object that is going to be rendered on our chart
- The
colorobject is a string that we are going to pass as the color of our object on our chart
- The
labelproperty is the label name of our object
const data = [ { id: "java", label: "java", value: 195, color: "hsl(90, 70%, 50%)" }, { id: "erlang", label: "erlang", value: 419, color: "hsl(56, 70%, 50%)" }, { id: "ruby", label: "ruby", value: 407, color: "hsl(103, 70%, 50%)" }, { id: "haskell", label: "haskell", value: 474, color: "hsl(186, 70%, 50%)" }, { id: "go", label: "go", value: 71, color: "hsl(104, 70%, 50%)" } ];
We can also customize our
pie component by passing properties such as
padAngle and
cornerRadius. The
padAngle prop determines the angle between each object in our chart. The
cornerRadius prop is the value we can pass as the border radius of each object.
Our final component ends up like this:
const Pie = () => { return ( <ResponsivePie data={pieData} margin={{ top: 40, right: 80, bottom: 80, left: 80 }} innerRadius={0.5} padAngle={0.7} cornerRadius={3} activeOuterRadiusOffset={8} borderWidth={1} borderColor={{ from: "color", modifiers: [["darker", 0.2]] }} arcLinkLabelsSkipAngle={10} arcLinkLabelsTextColor="#333333" arcLinkLabelsThickness={2} arcLinkLabelsColor={{ from: "color" }} arcLabelsSkipAngle={10} arcLabelsTextColor={{ from: "color", modifiers: [["darker", 2]] }} /> ); };
The final result should look like this:
Conclusion
Nivo provides many different components for creating data visualization in React applications. Its vast list of components includes a calendar component, a Choropleth component (a divided geographical area component), a tree map component, and many more.
You can apply most of the techniques we learned in this tutorial to create other types of data visualization components besides the bar and pie chart. The idea here was to give a glimpse of what you can achieve using Nivo and how powerful this data visualization library is.
There is no right or wrong chart library for a given task; it all depends on the results you’re aiming to achieve and the requirements of your project. That said, the tools and features available with Nivo make it an excellent chart library for creating stunning, impactful data visualizations in React.
Nivo is open-source and the community around it is very active and helpful. The documentation is well-written and you can learn how to use some components in mere minutes. At the end of the day, the wide selection of components and variety of use cases they serve makes Nivo one of the best React chart libraries hands down..
|
https://blog.logrocket.com/building-charts-in-react-with-nivo/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Virtual Actors¶
Introduction¶
Workflows also provides a virtual actors abstraction, which can be thought of as syntactic sugar on top of a dynamic workflow. Virtual actors are like Ray actors, but backed by durable storage instead of a running process. You can also launch sub-workflows from the methods of each virtual actor (e.g., train models in parallel). Here is a basic example:
from ray import workflow import ray @workflow.virtual_actor class Counter: def __init__(self, init_val): self._val = init_val def incr(self, val=1): self._val += val print(self._val) @workflow.virtual_actor.readonly def value(self): return self._val workflow.init() # Initialize a Counter actor with id="my_counter". counter = Counter.get_or_create("my_counter", 0) # Similar to workflow steps, actor methods support: # - `run()`, which will return the value # - `run_async()`, which will return a ObjectRef counter.incr.run(10) assert counter.value.run() == 10 # Non-blocking execution. counter.incr.run_async(10) counter.incr.run(10) assert 30 == ray.get(counter.value.run_async())
In the code above, we define a
Counter virtual actor. When the
Counter is created, its class definition and initial state is logged into storage as a dynamic workflow with
workflow_id="my_counter". When actor methods are called, new steps are dynamically appended to the workflow and executed, returning the new actor state and result.
__dict__ in virtual actors must be able to json serializable, otherwise
__getstate__ and
__setstate__ must be defined, which will be called on each step to restore and save the actor.
We can retrieve the actor via its
workflow_id in another process, to get the value:
counter = workflow.get_actor(workflow_id="counter") assert 30 == counter.value.run()
Readonly methods are not only lower overhead since they skip action logging, but can be executed concurrently with respect to mutating methods on the actor.
Launching sub-workflows from actor methods¶
Inside virtual actor methods, sub-workflow involving other methods of the virtual actor can be launched. These sub-workflows can also include workflow steps defined outside the actor class, for example:
@workflow.step def double(s): return 2 * s @workflow.virtual_actor class Actor: def __init__(self): self.val = 1 def double(self, update): step = double.step(self.val) if not update: # inside the method, a workflow can be launched return step else: # workflow can also be passed to anthoer method return self.update.step(step) def update(self, v): self.val = v return self.val handler = Actor.get_or_create("actor") assert handler.double.run(False) == 2 assert handler.double.run(False) == 2 assert handler.double.run(True) == 2 assert handler.double.run(True) == 4
Actor method ordering¶
Workflow virtual actors provide similar ordering guarantees as Ray actors: the methods will be executed in the same order as they are submitted, provided they are submitted from the same thread. This applies both to
.run() (trivially true) and
.run_async()`, and is also guaranteed to hold under cluster failures. Hence, you can use actor methods as a short-lived queue of work to process for the actor.
When an actor method launches a sub-workflow, that entire sub-workflow will be run as part of the actor method step. This means all steps of the sub-workflow will be guaranteed to complete before any other queued actor method calls are run. However, note that the sub-workflow is not transactional, that is, read-only methods can read intermediate actor state written by steps of the sub-workflow.
Long-lived sub-workflows¶
We do not recommend running long-lived workflows as sub-workflows of a virtual actor. This is because sub-workflows block future actor methods calls from executing while they are running. Instead, you can launch a separate workflow and track its execution using workflow API methods. By generating the workflow id deterministically (ensuring idempotency), no duplicate workflows will be launched even if there is a failure.
@workflow.virtual_actor class ShoppingCart: ... # BAD: blocks until shipping completes, which could be # slow. Until that workflow finishes, no mutating methods # can be called on this actor. def do_checkout(): # Run shipping workflow as sub-workflow of this method. return ship_items.step(self.items)
@workflow.virtual_actor class ShoppingCart: ... # GOOD: the checkout method is non-blocking, and the shipment # status can be monitored via ``self.shipment_workflow_id``. def do_checkout(): # Deterministically generate a workflow id for idempotency. self.shipment_workflow_id = "ship_{}".format(self.order_id) # Run shipping workflow as a separate async workflow. ship_items.step(self.items).run_async( workflow_id=self.shipment_workflow_id)
|
https://docs.ray.io/en/latest/workflows/actors.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
GREPPER
SEARCH
SNIPPETS
USAGE DOCS
INSTALL GREPPER
All Languages
>>
Whatever
>>
apache
“apache” Code Answer’s
apache
whatever by
Magnificent Monkey Adi
on Oct 13 2021
Donate
Comment
-1
cd /Library/WebServer/Documents/ index.html.en page.php //Apache config file /etc/apache2/httpd.conf //stop sudo apachectl -k stop sudo apachectl stop //start sudo apachectl start sudo apachectl -k start //restart sudo apachectl restart //reload apache web server after editing the config file $ sudo vi /etc/apache2/httpd.conf //Make changes as per your needs. Close and save the file. To reload new changes, run: $sudo apachectl graceful
apache
shell by
Sparkling Swiftlet
on Nov 15 2021
Comment
-1
httpd-2.4.51-win64-VS16.zip 07 Oct '21 10.409k PGP Signature (Public PGP key), SHA1-SHA512 Checksums
Source:
Add a Grepper Answer
Whatever answers related to “apache”
How to start apache2 server
org.apache.http.ProtocolVersion
ubuntu apache status
start apache service
install apache ubuntu
apache new site
tempo sessão apache
Apache2
astop apache
attemting to start apache server
Finding Apache http Process
ubuntu apache2 command
apache2 ensite
apache2 site
install apache2 ubuntu
Apache 2
apache license
Whatever queries related to “apache”
apache
what is apache
what does apache do
apache
what is apache
apache ?
the apache
apache
apache.
what is the apache
reload ssh config
restart ssh service
set hostname debian 10.
ports in use linux
address already in use
ubuntu listening ports
address in use
init_daemon_domain
simple httpserver too long to respond
network service discovery disabled
adonis serve ip external
server host comparison wbesite
change default directory of the server
could not open tor as root in kali
Enable network request in react native
mocha chai serverless
hardhat test ethers provider
kill process in windows using port number
ng serve host 0.0.0.0
address already in use :::8081 windows
Passive and active sockets
How many different virtual connections can exist between a node and an ATM network
pia won't connect can't reach vpn server
how to change quarkus port
without matching <VirtualHost>
mindustyr port
mindustry port
ngrok invalid host header
adb server kill and start
apache proxy pass after domain
start hostednetwork
prosody
redis default port
nmap portscan
exyl ping!
host key differs from the key for hte ip address
ngrok minecraft server
where are host file
redis localhost url
stoped server
tls vs ssl
there is no place 127.0.0.1
https stands for
nginx proxy manager default login
port kill in windows
nmap enable host names
App Transport Security
which service using port 80 command
check process on a given port linux
my localhost working slow macbook
my localhost working slow
mac find who is using port 8080
tmux disconnect other clients
failed to complete tunnel connection ngrok
apache enable mod headers
port 4200 is already in use.
apache2 default config file
windows 10 network registry tweaks
run next on differnet port
pmmp servers
pmmp server host
pmmp server hosting
pocketmine server hosting
pocketmine server host
https default port
the requested url was not found on this server. apache/2.4.41 (ubuntu) server at port 80
apache default document root
nginx proxy pass
add domains to certbot
can't use localhost wsl2
allow localhose wsl2
mac httpd: Could not reliably determine the server's fully qualified domain name, using
apache new site
send different data to sender and clients
socket.io emit to all other clients
xampp apache not starting windows 10 port 80
add ssh key to ssh agent
windows ssh tunnel
simplehttpserver
how to know pid of a port in mac
web dev server
ionic serve in different port
airflow username password
apache airflow default login
kill server on port mac
using proxy with curl
What is peer review ?
.local ubuntu
curl host header
run symfony server
ssh connect with specific port
windows check port usage
how to check status code of a web server through terminal
ss in linux command stands for
ngrok header
nmap check ftp port
free port in mac
access localhost from the internet
npm socket io
ngrok config file
whoami
Restart remote computer
how to get heroku on windows cmd
CORS Socket io v2
Port 3000 is already in use.
krok exam
how to test websocket
horion client
apache get active sites
app.run(host='0.0.0.0' port=5000 debug=true)
PostgreSQL - is the server running locally and accepting connections on unix domain socket /tmp/.s.pgsql.5432
enable ssh ubuntu
something is already running on port 3000. npm start
Something is already running on port 3000
see what user a process is running as
ng serve port command
generate ssh in ubuntu
astop apache
stop apache2
what is caching proxy server
difference between websocket and socket
kill process by port
ngrok command 80 not found
logger: socket /dev/log: No such file or directory
logger: socket /dev/log
proxy scarper
rewrite host headers ngrok
ssh file to server
ssh transfer file
minecraft ssh port forwarding
nginx ssl
address already in use :::1337
Error: listen EADDRINUSE: address already in use :::3000
Chat socket closed unexpectedly
tcpdump only http
How to scp using another port
express server sockjs
line 1030, in run_simple s.bind(server_address)
socket io headers
minikube ingress
localhost
sudo user centos
free port number 3306 in windows 10
ssh with key
free web hosting
android connect to localhost
heroku new url
kill process on port 3000 mac
twilio alternative free
withrouter with connect
Received HTTP code 407 from proxy after CONNECT
docker redis port 6379
linux hosts file
/etc/hosts
nginx https
openssl verify cert
nginx proxy_pass
start portainer
how to add iis to ngrok
iptables block all ssh except 1 ip
infinity hosting nameservers
what is SSL
what does server running on a computer means
rockstar games download
<customErrors mode="Off"/>
web.config
ftp//:192.168.8.2
flush the port 3001
custom localhost domain windows
Command to create the private key and public using the openssl command :
netstat command ports
adding local domain name mac
run a lumen project
start lumen server
npm socket client
how to stop llistening to port 80
Connect to socket.io node.js command line
tomcat-server-error-port-8080-already-in-use
Another web server is already running
ftp server testing online
how to scp from server to local
pdo add port
ssh to server
curl no proxy
nginx reverse proxy multiple folders
xampp port 80 in use
unable to resolve host myhost
what is port 80 used for
how to scp from remote machine to local machine
proxy settings
what is proxy
proxy server
code server config
next server start
mac ipconfig
linux external ip
network security commands linux explained
raspberrypi list all connected networks
install ngrok on server
sntp server india
squid proxy dns blasklist
default port for ftp
what is redis
c socket SO_REUSEADDR
iis applicationhost.config location
live server
connect to samba
apache enable site
proxy shell
redis install ubuntu
how to put a route to host on redhat
socketio client
Problem binding to port 80: Could not bind to IPv4 or IPv6.
transporter
open local hosted server
adding in ssh agent
reconnecting-websocket
ssh powershell
run redis server
socket in laravel
how to open a port in gcp
ngrok stack overflow localhost expose
nmap scan for a port on whole subnet
certbot command
ngrok start server rails
change wrd port number for cmd
Domain name should not be localhost or 127.0.0.1 instamojo
transport.host environment variable
block ccleaner in hosts
wich application is using a port
cisco encryption command
ssh passwordless ssh-copy-id
An administrator must connect to the server via “localhost” to complete setup.
open port 8080 fedora
scrapy proxy pool
how to make a get request over socket c
socket io nodejs
getstaticpaths is required for dynamic ssg
where are apache sites enabled
rpi cli change hostname
start celery server
find current server name for SSMS
serverless templates
required for server start
cisco umbrella service in mac book
socket io new server multiple origins
get host socket.get
ovpn set dns
ncrack ftp brute force
port running linux
Sharing State Between Livewire And Alpine
maximum websocket connections
Socket.io Client Tool
socket io
mikrotik ssh passwordless
mikrotik ssh
odoo domain
Wordpress behind reverse proxy
send the public key to the server windows
download dump file from vm to local (run this at local)
download data from ssh
How to download a file from server using
Enable remote access for MySQL
change next port
ngrok specific folder
Port 80 in use by "Unable to open process"
DNS_PROBE_FINISHED_NXDOMAIN
npx http server
REMOTE HOST IDENTIFICATION HAS CHANGED
windows 10 port forwarding
listen to port
How to connect to remote Redis server?
execute command remotely by ssh
tcp proxy server windows
windows tcp tunnel
Navigate to your localhost web server
ngrok sing up
can we host a wordpress on heroku
server side rendering
how to allow only a specific ip on a docker port using ip tables
what's the default gateway
cpanel
access ssh session
kill process on port windows
raspberry pi ssh headless
hostListener
local
what happens when dhcp relay is enable
socket.io
port 80 xampp
remote desktop
who uses redis
setting ssh for github
Enable SSH Agent
connection
http_proxy
what is Redis used for
ssh port
redis
redis on windows
ngrok
Could not resolve host: api.telegram.org
how to use configure gateway using mac address
"terminal" check privacy control settings tcc.db big sur
kannel port 13002 not listening
progressive web app start url
upload local to remote ssh
install ws-redis
????ip l2tp add tunnel
how to remote test
ambari server
on which connection net works faster
3.6.1 implement vlans and trunking
deploy command for ropsten network
ish app
systemd service location
Disable LAN proxy
-226.1279.727
MS teams process incoming webhooks
cisco call drop troubleshooting
ngrok proxy ngrok.yml
why localhost is not working while 127.0.0.1 works
localhost throw virtual machine
/connect to server
peer dl
kiwix server au demarrage
serverles create function
socket io across two different ports
GlobalHost.ConnectionManager.GetHubContext core 3.1
openssh setup firewall on windows
What are the correct permissions for ~/.ssh?
A server is already running. Check /home/mahi/Desktop/PharmaPlace/tmp/pids/server.pid. Exiting
macos localhost self signed certificate
how to test remote
DNS_PROBE_STARTED
how to connect to ftp port and send data
sss
serverless documentation
Service host/port is not set.
eopkg commands
MobaXterm X11 proxy: Authorisation not recognised
redis for mac
http_user agent vers SQL
"sudo su -"
proxycurl
proxmox default port
R studio server port
http sever with ssl
how to create project in ssh mobaxterm
why does my vpn not work on netflix
centos remote desktop
wandb localhost change base url
fiveserver config
wmware access host machine files
You need to uninstall/disable/reconfigure the blocking application 9:53:56 PM [Tomcat] or reconfigure Tomcat and the Control Panel to listen on a different port
sshd status
remote test
process is listening on a TCP or UDP port Windows
anndroid emu use localhost
solus inasll app command
nginx default server
fichier zone dns
The Tomcat server configuration at \Servers\Tomcat v9.0 Server at localhost-config is missing. Check the server for errors.
Stop ssh connection after execute a script
golang https stop ssl verification
add environment port variouble
openstack show ports
cisco packet tracer privileged exec password
how to run a deno server
what is port in socket programming
tyemporary pyuthon server in bash
run app without port
pool
rendre iptables passées en cli persistentes
what is the livereload port
ansible ping with diferent port
remote desktop can't connect to the remote computer for one of these reasons
zimbra smtp relay authentication with multiple domains
what is freepbx default host name
Start this server up
command to run after exposing route symfony
Using the Cloud SQL Auth proxy to connect to multiple instances
mila azul inurl:"ftp://" -inurl:"http://" -inurl:"https://"
windows get pid by port
client side application
wamp apache cannot start
androguard
netty dns macos
nexus default port
How to ignore SSL issues
sdp ssp stands for andriod
check ssh login history
check if web server is online
Use '--port' to specify a different port.
don't show port a domain is running at .htaccess
Configure Switch to be the primary root for VLAN 1
iptables localhost redirection
Windows internal database connection
can't connect to bmbf site can't connect to
Windows How to Check Open Connection
mc auth servers
vmware issue blocking to run apache server in xampp
"oci cli" proxy
flask socketio usage javascript
how i get post request on local host
wordpress with socket.oi site:stackoverflow.com
how to set proxy in xfce
Port error
wildfly how to check number of open connections
ng serve -- port 5200
org.apache.http.client.httpclient set tls version
ssl thingwrox
windows add domain to localhost
How to use browser-sync to serve files easily
firewalld get zones
paramiko ssh
start service
host must conform to dns 952 subdomain conventions
localhost:30007api/tareas
what does ftw stand for
systemd show sshd services
how to get my site to not show ports
scan open ports on a server
api of mojang for server
python3 ngrok.py
how to monitor my applications net http traffic locally in terminal
export display connection for wsl
psexec connection command
minio port
pterodactyl firewall
2020-07-12T22:14:08.819617Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: sYotK!Dzg8uk
Start a browser on the jump server
HOSTING HEROKU WITH XAMPP
pdo firebird connection
iron exporting ports
Issue with accessing containers externally on wireguard vpn
ssh and execute command in one line
check availability http
192.168.1.137.1
amazon ami 2 php ini
localtechnerd
ssh leave process running
nuget system.web.http.webhost
zerotier ssh server connect to host xxx port 22: Connection timed out
test 10 owasp waf rules using curl
openstacl check over commitment
firewalld redhat
telnet command to check port open or not
how to pass in local in serverless
ssh
node.js check if a remote URL exists
ProxyPassReverse takes one, two or three arguments, a virtual path and a URL for reverse proxy behaviour
create room socket io
sip sdp
server implementation
msi live update no internet connection
iptables FTP passive mode
systemd verifiy if sshd services is active/inactive
ayli_v0_1.local
samp server alternate port
squid allow all IPs
force curl tls
check if browser supports Local Storage
change linked server name
serial ports in asm
loclhost
check status of subprocess
netsh add new interface
debug=true in socketio.run
open ftp servers
Is apache tomcat free?
VicunaHosting
webpacl port
ngrok host header invalid
install apache symfony php
FTPSMTPDNS
problem in choosing port in arduino stack overflow
a unix system is currently running in user mode. when the system executes fread(), the system
allow port to outside
port for samp
mac terminal close ssh connection
Zitalu Odenigbo
access virtual host from lan xampp
getFirstMediaUrl localhost witout 8000
systemd stop/start sshd services
what is default password for database server localhost
DNS DCV: No local authority:
why does port 80 only work
fibrous root
add admin port in server.xml tomcat
daemon redis server
ssh access code
change wrd port
google api keys localhost restriction
why smtp works on application layer
smtp url example
how to check if local storage is available
127.0.0.1:49154
find pid of a port
running mininet as a root
how to pass a variable to remote ssh command
how to use socket.io to send messages between just 2 people
knex kill connection
RewriteEngine On RewriteCond %{SERVER_PORT} 80 RewriteRule ^(.*)$ [R,L]
sidekiq openshift
nginx proxy gunicorn
ssh.net sshpass
ssh config only key
virtualbox no option for host only network
{"t":{"$date":"2021-05-11T16:09:32.697+05:30"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
raspberry pi how to add multiple networks to headless
ListenAndServe
local config
https lightsail wordpress
fbelserver
systemd enable sshd services
get hostname without subdomain in nginx config
running geoserver from a different port ubuntu
SS
windows server port açma
ERROR — DOCUMENT FOLLOWS This web server is running in SSL mode. Try the URL instead.
tun dev with socks
dnl configure.ac
concerpt dmcc
mac list used port
org.apache.cxf.resource.URIResolver.close()
are urls encrypted when using TLS/SSL (https)
if i boght cold war with russin vpn i still play in eruope servar
osp cheap web hosting
React Routing works in local machine but not Live self hosted server...
qt ftp server
qradar host locked:
the services communicate with each other internally using _____________
sshfs user had no write access to mountpoint
istio letsencrypt cluster issuer /.well-known/acme-challenge/
sserembonlineschool.com
ss get process ids
vsftpd only allow localhost
bind another path to default apache root
Add or remove published service ports of an existing service
env data wrong on server cache
sockaddr_in c
ssh cisco firewall 5512
udg:///dev/ttyUSB0
ByPass Proxy Settings
django-sslserver
Client network socket disconnected before secure TLS connection was established
Install both Heimdall Proxy and Central Console on a single host
use node-session with serverless functions
https connection pool max retries exceeded
how to create remote webdriver
run a REST server using the following command:
anything connection to the internet can be hacked
web hosting in uganda
dictionary attack ftp
copy from local to remote machine (fast)
unlock qradar hosts
site:nasa.com inurl:admin
make show linker command
hypixel unknown host
how tohe process between a server and a client work?
"terminal" check privacy control settings tcc.db
crain communications
write int to socket
log or capture telnet session into a file
Enable SSH
vlc out localhost from output sound
proxy_pass real ip
https/1.1
adb bridge localhosr
Feign client
check if chrome remote port is open
pritunl vpn
sed command inside tcl script
Domain name should not be "localhost" or "127.0.0.1 instamojo
web server without port forwarding
2 chaves ssh
from websocket import create_connection
use redis in adonis
From Privileged to Global Configuration in cisco
what is client in web service
How to allow access outside localhost
Your browser sent a request that this server could not understand. Reason: You're speaking plain HTTP to an SSL-enabled server port. Instead use the HTTPS scheme to access this URL, please.
cpanel ssh port
sock
putting remote access to rpi zero
chrome does not support socks5 proxy authentication
pool coonex
raspberry pi lite os headless first boot not connecting to network
how to use heroku on windows
apache
python selenium proxy
add new vlan to cisco switch
minecraft bedrock server cheap
cheap mcpe server
very cheap bedrock server
minecraft pe server
minecraft pe server hosting
best cheap server hosts
visualise port tree freebsd
smart life app cannot connect
trickster server
http 2 keep connection open
proxy_buffers 16 16k; proxy_buffer_size 16k; net worth
latex text size
jake paul
Logan Paul
markdown table
adblock
remote: Support for password authentication was removed on August 13, 2021. Please use a personal access token instead.
github authentication
gold color code
.htaccess file
wordpress default htaccess
wordpress ht access file
arch
flutter create new project
pm2 starts npm run start
pm2 start npm start
pm2 start with name
'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
full width and height iframe
create a venv
moment date difference in days
node default version
nvm set default
adb is not recognized
fatal: remote origin already exists.
git fatal: remote origin already exists.
dark gray hex
git remote add wrong thing how to remove
remove
Install laravel via composer
windows laravel installer
Install laravel using composer
install laravel globally ubuntu
download the Laravel installer using Compose.
whitespace regex
autoformating for code in vscode when i save it
=== in visual studio
video games
make video game
backslash
delete conda environment
conda delete environment
como eliminar un ambiente de conda
remove a conda environment function
firebase update cloud method
firebase deploy only function
firebase cloud method update
2001 a space odyssey
nginx 403 forbidden
.
|
https://www.codegrepper.com/code-examples/whatever/apache
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
jsweet alternatives and similar libraries
Based on the "Miscellaneous" category.
Alternatively, view jsweet alternatives based on common mentions on social networks and blogs.
FizzBuzz Enterprise EditionFizzBuzz Enterprise Edition is a no-nonsense implementation of FizzBuzz made by serious businessmen for serious business purposes.
JavaCV8.8 6.3 jsweet VS JavaCVJava interface to OpenCV, FFmpeg, and more
Simple Java Mail5.8 8.1 jsweet VS Simple Java MailSimple API, Complex Emails (Jakarta Mail smtp wrapper)
PipelinR3.4 2.0 jsweet VS PipelinRPipelinR is a lightweight command processing pipeline ❍ ⇢ ❍ ⇢ ❍ for your Java awesome app.
yGuard3.3 5.9 jsweet VS yGuardThe open-source Java obfuscation tool working with Ant and Gradle by yWorks - the diagramming experts
JCuda2.7 0.0 jsweet VS JCudaJCuda samples
Less time debugging, more time building
Do you think we are missing an alternative of jsweet or a related project?
README
JSweet: a Java to JavaScript transpiler.
- JSweet is safe and reliable. It provides web applications with type-checking and generates fully type-checked JavaScript programs. It stands on Oracle's Java Compiler (javac) and on Microsoft's TypeScript (tsc).
- JSweet allows you to use your favorite JS library (JSweet+Angular2, JSweet+threejs, IONIC/Cordova, ...).
- JSweet enables code sharing between server-side Java and client-side JavaScript. JSweet provides implementations for the core Java libraries for code sharing and legacy Java migration purpose.
- JSweet is fast, lightweight and fully JavaScript-interoperable. The generated code is regular JavaScript code, which implies no overhead compared to JavaScript, and can directly interoperate with existing JavaScript programs and libraries.
How does it work? JSweet depends on well-typed descriptions of JavaScript APIs, so-called "candies", most of them being automatically generated from TypeScript definition files. These API descriptions in Java can be seen as headers (similarly to *.h header files in C) to bridge JavaSript libraries from Java. There are several sources of candies for existing libraries and you can easily build a candy for any library out there (see more details).
With JSweet, you take advantage of all the Java tooling (IDE's, Maven, ...) to program real JavaScript applications using the latest JavaScript libraries.
Java -> TypeScript -> JavaScript
Here is a first taste of what you get by using JSweet. Consider this simple Java program:
package org.jsweet; import static jsweet.dom.Globals.*; /** * This is a very simple example that just shows an alert. */ public class HelloWorld { public static void main(String[] args) { alert("Hi there!"); } }
Transpiling with JSweet gives the following TypeScript program:
namespace org.jsweet { /** * This is a very simple example that just shows an alert. */ export class HelloWorld { public static main(args : string[]) { alert("Hi there!"); } } } org.jsweet.HelloWorld.main(null);
Which in turn produces the following JavaScript output:
var org; (function (org) { var jsweet; (function (jsweet) { /** * This is a very simple example that just shows an alert. */ var HelloWorld = (function () { function HelloWorld() { } HelloWorld.main = function (args) { alert("Hi there!"); }; return HelloWorld; }()); jsweet.HelloWorld = HelloWorld; })(jsweet = org.jsweet || (org.jsweet = {})); })(org || (org = {})); org.jsweet.HelloWorld.main(null);
More with the live sandbox.
Features
- Full syntax mapping between Java and TypeScript, including classes, interfaces, functional types, union types, tuple types, object types, string types, and so on.
- Extensive support of Java constructs and semantics added since version 1.1.0 (inner classes, anonymous classes, final fields, method overloading, instanceof operator, static initializers, ...).
- Over 1000 JavaScript libraries, frameworks and plugins to write Web and Mobile HTML5 applications (JQuery, Underscore, Angular, Backbone, Cordova, Node.js, and much more).
- A Maven repository containing all the available libraries in Maven artifacts (a.k.a. candies).
- Support for Java basic APIs as the J4TS candy (forked from the GWT's JRE emulation).
- An Eclipse plugin for easy installation and use.
- A Maven plugin to use JSweet from any other IDE or from the command line.
- A Gradle plugin to integrate JSweet with Gradle-based projects.
- A debug mode to enable Java code debugging within your favorite browser.
- A set of nice WEB/Mobile HTML5 examples to get started and get used to JSweet and the most common JavaScript APIs (even more examples in the Examples section).
- Support for bundles to run the generated programs in the most simple way.
- Support for JavaScript modules (commonjs, amd, umd). JSweet programs can run in a browser or in Node.js.
- Support for various EcmaScript target versions (ES3 to ES6).
- Support for async/await idiom
- ...
For more details, go to the language specifications (PDF).
Getting started
- Step 1: Install (or check that you have installed) Git, Node.js and Maven (commands
git,
node,
npmand
mvnshould be in your path).
- Step 2: Clone the jsweet-quickstart project from Github:
bash $ git clone
- Step 3: Run the transpiler to generate the JavaScript code:
bash $ cd jsweet-quickstart $ mvn generate-sources
- Step 4: Check out the result in your browser:
bash $ firefox webapp/index.html
- Step 5: Edit the project and start programming:
- Get access to hundreds of libs (candies)
- Refer to the language specifications to know more about programming with JSweet
- Eclipse users: install the Eclipse plugin to get inline error reporting, build-on-save, and easy configuration UI
More info at.
Examples
- Simple examples illustrating the use of various frameworks in Java (jQuery, Underscore, Backbone, AngularJS, Knockout):
- Simple examples illustrating the use of the Threejs framework in Java:)
- Node.js + Socket.IO + AngularJS:
- Some simple examples to get started with React.js:
- JSweet JAX-RS server example (how to share a Java model between client and server):
- JSweet Cordova / Polymer example:
- JSweet Cordova / Ionic example:
- JSweet Angular 2 example:
- JSweet Angular 2 + PrimeNG:
Sub-projects
This repository is organized in sub-projects. Each sub-project has its own build process.
- JSweet transpiler: the Java to TypeScript/JavaScript compiler.
- JSweet core candy: the core APIs (JavaScript language, JavaScript DOM, and JSweet language utilities).
- JDK runtime: a fork from GWT's JRE emulation to implement main JDK APIs in JSweet/TypeScript/JavaScript.
- JSweet candy generator: a tool to generate Java APIs from TypeScript definition files, and package them as JSweet candies.
- JSweet documentation: JSweet documentation.
Additionally, some tools for JSweet are available in external repositories.
How to build
Please check each sub-project README file.
Contributing
JSweet uses Git Flow.
You can fork this repository. Default branch is develop. Please use
git flow feature start myAwesomeFeature to start working on something great :)
When you are done, you can submit a regular GitHub Pull Request.
License
Please read the LICENSE file.
*Note that all licence references and agreements mentioned in the jsweet README section above are relevant to that project's source code only.
|
https://java.libhunt.com/jsweet-alternatives
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Important changes to forums and questions
All forums and questions are now archived. To start a new conversation or read the latest updates go to forums.mbed.com.
3 years, 5 months ago.
How to get led to blink, and print times the LED turned on using the ‘printf’ function.
sorry does anyone know how to get this to work.
1. Write a program using ‘DigitalOut’ on the mbed board to blink LED1 at 1Hz and print the number of times the LED turned on using the ‘printf’ function. I tried to use a counter but it did not seem to work.
#include "mbed.h" DigitalOut myled1(LED1); int main() { myled1 = 1; int counter = 0; //initialise counter object to zero void count(){ counter++; //increment counter object db.locate(0); db.printf("%06d", counter); wait(1); to get the 1 hz }
1 Answer
3 years, 4 months ago.
Hi David,
So a few initial notes regarding your program:
- Are you using a seperate library or
Serialobject to define the
dbvariable? I am not sure whether you have pasted your code in its entirety or if it is just a snippet.
- Your
void count(){function declaration needs to be placed outside of
main(), however for a small program like this I would use a
whileloop within
main()instead. This will allow you to blink the LED and increment the counter forever (a loop that never stops running). For example:
#include "mbed.h" DigitalOut myled1(LED1); int main() { myled1 = 1; int counter = 0; //initialise counter object to zero while (true) { myled1 = !myled1; // Turn on/off myled1 counter++; //increment counter object db.locate(0); // Where is your "db" object declared? db.printf("%06d", counter); wait(1); } }
After you add a declaration for your
db object (on line 11 above), you should then be able to view the increment counter
printf's. I also added the code
myled1 = !myled1 to alternate the blinking each loop. The
!myled1 essentially means to turn off the LED if it is currently on, or to turn on the LED if it is currently off.
Also, if you would like to just view the
printf output via your computer, I would take a look at the introductory debugging tutorial here:
Please let me know if you have any questions!
- Jenny, team Mbed
If this solved your question, please make sure to click the "Thanks" link below!
|
https://os.mbed.com/questions/82092/How-to-get-led-to-blink-and-print-times-/
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
GREPPER
SEARCH
SNIPPETS
USAGE DOCS
INSTALL GREPPER
All Languages
>>
Whatever
>>
serialization in api
“serialization in api” Code Answer
serialization in api
whatever by
Obedient Ocelot
on Jan 14 2021
Comment
1
DE-SERIALIZATION: JSON --> JAVA OBJECT SERIALIZATION: JSON <-- JAVA OBJECT I add jacksonDataBinder dependency in my pom.xml file -> it is a json parser. That is used for converting from java object to json and from json to java object.
Add a Grepper Answer
Whatever answers related to “serialization in api”
what is serialization
what is serialization and deserialization in java
what is serialization in java
what is serialization and deserialization in rest assured
what is serialization in rest assured
web api serializer settings for api controller
serialization in django
what is serialization in django
how i send custom data in model field rest_framework serializer
Django serializer,
Whatever queries related to “serialization in api”
serialization and deserialization
convert api
api serialization
deserialization and serialization
what is serialization and deserialization?
implement serialization and deserialization
serialization deserialization examples
api serialization wehat is it
what is serialization and deserialization?
serialization & deserialization api
serialization and deserialization what is
serialization and deserialization means
what is meant by serialization and deserialization in c#
serialization, and deserialization
value objects can be used for serialization/deserialization
what is serialization in rest api
serialization and deserialization
object serialization and deserialization?
explain serialization and deserialization
serialization vs deserialization
serialization vs deserialization c#
need of serialiation and deserialization
serialization and deserialization *
implement serialization and deserialization ?
what is the difference between serialize and deserialization
what is used to control serialization and deserialization
serialization api
what is serialization and deserialization?
serialization and deserialization operations
what is serialization api
what is serialisation and deserialization
serialization vs deserialization
where is serialization and deserialization used
serialiazation and deserialization
what is serialization & deserialization
java serialization vs deserialization
deserialization / serialisation
what is serialization and deserialization\
what is serialization and deserialization
serialization deserialization
what is serialization and deserialization'
serialization & deserialization
what is serialization in api
difference between serialization and deserialization in c#
serialisation vs deserialization
serialization / deserialization
serialization and deserialization in rest api
convert to api
sérialisation api
serialization
serialisation and deserialization in js
benefits of serialization and deserialization
deserialization vs. serialization
why do we need serialization and deserialization
serialisation deserialization
difference between serialize and deserialize
serialization vs deserialization java
More “Kinda” Related Whatever Answers
View All Whatever Answers »
godot check if object is in group
objectmapper convert list of objects
eloquent where between
enum foreach
Target class [ExamSeeder] does not exist.
Target class [Seeder] does not exist.
twig foreach
whereBetween
laravel between
aurelia repeat.for get index
spring jackson mapper ignore unknown fields
template for loop iteration number
django forloop.counter
set unique true sequelize
413 Request Entity Too Large
print all values of dictionary
elasticsearch query gettign fields
codeigniter dump query
print codeigniter model query
twig classes array to string
how to use between in active record
objectmapper convertvalue to list of objects
load multiple database codeigniter
use forvalues stata
belongs_to optional true
Skip model mutator laravel
laravel ignore mutators
Skip model accessor
where not equal to in codeigniter
loop through map apex
Illuminate\Database\QueryException
left join ef
left join in entity framework 6.0
left join codeigniter
freemysqlhosting keeps deleting tables
hibernate keeps deleting tables
schema default value for field elixir
value map meaning
not able to add values to newly added column sequelize
hash collision when truncating sha-1
escape % in Knex where like query
uml class diagram hashtag
how to return variable from transaction Laravel
A mutable object
how to select all customers that didn't order
raise self.model.DoesNotExist( base.models.Product.DoesNotExist: Product matching query does not exist.
Validation rule unique requires at least 1 parameters.
foreign_key_checks
similarity between abstract class and interface java
linq sum group by
Use LINQ to get items in one List<>, that are not in another List<>
find element in list not in another list c#
delete multiple row in laravel
see index size elasticsearch
aggregate functions are not allowed in WHERE
twig foreach select option
terms aggregation elasticsearch using Key order
write many2many odoo
add array type to sequelize migration
choose locator
ef code first unique constraint
product type scala
mysql select where not in multiple tables
is_single
like codeigniter
populate set in builder
findonebyid
lookup relationship vs master detail relationship
dataannotations number greater than 0
Count the number of classes an element has?
abap loop internal table
linq object list contains
drupal 8 get query string parameters
AUTO_INCREMENT Cannot change column 'id': used in a foreign key constraint
cannot change column 'id': used in a foreign key constraint
is natural and inner join same
updateorcreate
yup schema[(intermediate value)(intermediate value)(intermediate value)] is not a function
there is no unique constraint matching given keys for referenced table
sum two columns in eloquent
find unique values between multiple array
how to check if data is an array or not ruby
maek empty querytset
Laravel eloquent mass update
laravel update
lookup relationship in salesforce
get data in from a collection firestore
cascade laravel
eloquent delete all where
jpa query methods count
super key in dbms with example
delete join select from one table based on multiple values
rails check if key exists
post mapping in spring boot
show all the document in elasticsearch
To view the all document available in indices
how to view all the document available in indices in kibana
map and filter in python
umbraco nested content
concat in laravel 8
drupal 8 get field entities
update hashmap value while iterating
sequelize count in include
what are active record enum
rails enum
Rule in in laravel
intellij select multiple words
drupal 8 database query or condition
typeorm raw query params
collection get firesotre
python - largest val
Finding the largest value in dictionaires in python
key of max value dictionary
hashmap clone
cfscript loop query
Cannot add or update a child row: a foreign key constraint fails
groupby in linq
Explain Entity, Entity Type, and Entity Set in DBMS?
entity framework linq multiple joins
linq where in list
print only values in map
mongodb aggregation $filter multiple conditions
rails distinct
General error: 3780 Referencing column 'user_id' and referenced column 'id' in foreign key constraint 'blog_posts_user_id_foreign' are incompatible.
c# distinct by property
laravel: get id of inserted row
rvm is not a function selecting rubies with 'rvm use ...' will not work. rubymine
twig if ternary check
hashmap put
user as foreign key in django
1 - Cannot delete or update a parent row: a foreign key constraint fails
thymeleaf table for each
yii query not equal
flutter sort list by object property
firebase orderbychild update
count on field value and group by in sequelize
rails array count occurrences of elements
c# dictionary to json
datatable does not contain a definition for asenumerable
filter collection (laravel)
Ruby on rails execute query
Laravel adding Foreign Key Constraints
linq group by multiple
sequelize group by
Laravel retrieving single record
object fromentries example
update queryset in django
Laravel create foreign key column in migration
rails add reference
what does queryselectorAll returns?
ruby array to string with commas
foreach in controller
SLocationWhenInUseUsageDescription
twig for loop
twig for key value
rails find_by
generate entities symfony
Sum values of column based on the unique values of another column
find duplicated entries present in a list
elasticsearch get mapping
get particular key and value from Map
kotlin join strings from list
join linq lambda
get reference field entities
on_member_join not working
find duplicates in lists with LINQ and get count
Sequelize find sort order
get one document based on id in firestore
MySQL repeated values
linq group by
entity framework database sequence
sequelize array of strings
rails add_column after
'dict_values' object is not subscriptable
sql like in codeigniter query
select all if id in another table
Limits and Pagination in Sequelize
firebase search into every child
samtools mapq filter
number of records in a resultset
Laravel eloquent query soft delete
belongsTo called with something that's not a subclass of Sequelize.Model.
foreign key
python split tuples into lists
python split list of tuples in two lists
query files in related list salesforce
Cannot delete or update a parent row: a foreign key constraint fails (
Call to undefined function Database\Seeders\str_random()
entry criteria vs exit criteria
get single column in codeigniter
add id attribute to jQuery steps
how to combine diff colmun value using group by postgres
sequelize findOrCreate
how to convert object in api
serialization in api
deserialization in api
serialization vs deserialization
findoneanddelete id
multiple mapping @requestmapping
spring delete mapping
query max in codigniter 4.1.1
CodeIgniter get_where order_by
eloquest how to select one specific column in database
ruby on rails validates presence of multiple fields
integrity constraint violation: 1452 cannot add or update a child row: a foreign key constraint fails
attach function in laravel
laravel pagination with query string, laravel pagination with string
blade forelse
statsmodels
query for select null in sql for laravel
compare two dates and sort array of objects
find whether one tuple value is available in another tuple
Difference between HashMap and HashTable
hashmap vs hashtable
typeorm findone subquery
Method Illuminate\Database\Eloquent\Collection::delete does not exist.
typeorm subquery
jackson avoid return property
jackson ignore property on return
linq select from array
creating model in ruby on rails
Print a specific value of dictionary
ruby array of symbols shorthand
entity framework return count of records
how to use list of item in modelMapper
laravel collection toQuery
Error: Chunk.entrypoints: Use Chunks.groupsIterable and filter by instanceof Entrypoint instead
linq when name then orderby
hibernate ddl auto property
queryselector get each element
linq distinct
how to iterate map in apex salesforce
filter dict with certain keys
AttributeError: module 'rest_framework.serializers' has no attribute 'ModelSerializers'
keystore get key hashes
on delete of foreign key delete corresponding rows in other table
Convertir Map a Json
sequelize inner join
ternary operator for multiple conditions
raw query in codeigniter
from where in list linq
check if field exists in java
Retrieve MD5 Hash Value of a String
uuid in scala
printing a map in kotlin
Implementation restriction: ContentDocumentLink requires a filter by a single Id on ContentDocumentId or LinkedEntityId using the equals operator or multiple Id's using the IN operator.
check for changed model fields in djnago signal
type list dynamic is not a subtype of type map string dynamic
goods_goodsinfo_goods_cag_id_a7b17a2d_fk_goods_goodscategory_id
thymeleaf th:if two conditions
snowflake select from query id
how to import model_to_dict
ruby activerecord find where less than
how to find a pair in a map of pair
disable foreign key = 0
c why do we have to use pthread_join
spatie media library add only single media
Single file collections add in spatie media library
Single file collections
Laravel return empty relationship on model when condition is true
apex add single quote
codeigniter select union
coldfusion querynew
tstringlist uses delphi
firebase db.collection (specific id)
else clause in xslt
string foreign key
check if an element is in data base linq lambda
select query in ci
adding queryparams resttemplate
knex datatypes
Multiple Ternary operator in php
lodash uniqby
twig conditions
schema knex.js
viewmodelproviders.of(this)
java find in json
Get all non-unique values (i.e.: duplicate/more than one occurrence) in an array
how to use id in aggregate
rails find_by order
rails find_by order limit
how to map through object.keys
golang map find
use or and in twig statement
salesforce map set values
laravel collection unique
where
how to insert duplicate key in hashmap
updating indices intellij
get number of rows in codeigniter model
extract specific key values from nested dictionary
select top 5 in linq c#
querydocumentsnapshot
delete all duplicate rows keep the latest except for one in mysql
graphql pass array as argument
add to subcollection firestore flutter
exclude attribute sequelize query
use multiple database codeigniter
sqequelize attributes exclude
sequelize check if exists
Updating JSON Column in laravel
sidekiq perform_at
Updating JSON Columns
if condition in julia
Except Method C#
sequelize add column existing model
check many keys in objects
th:with in thymeleaf
how to store value in array in controller and pass to view
select only unique values from and to current table
select only distinct values another table
select only distinct values from another table
select only distinct values from another table and excluding from current table
Self-join
what is truth table in logic gates
form builder tcoe
form builder tcode
form builder transaction code
Example of Nested IF Statement
Django values_list
duplicate
laravel collection only
.where ruby
laravel relationship
laravel relations find
one to one relationship
parse query string qs
jdbc join result
swagger parameters multiple query example
linq conditionnally add where clause
HashMap to Pojo
sequelize compare dates in two columns
Where do you use Set, HashMap, List in your framework?
multiselect_1
Database design - relations vs properties
join in perl
make all the numbers unique
rust array unique
lineList.reduceByKey(lambda accum,n: accum + n)
dapper inserting values from different entities
the_field
spark aggregatebykey
Freqtrade - List Data
djb2 hash function c explained
ienumerable vs iqueryable vs ilist vs icollection
find one and upadate
how to get fields of a table in rails
xfce4 whisker menu super key
ruby find max value in array
find duplicates in ArcGIS Pro
laravel collection makeHidden
Linq Select string fields concat as one fieldsub query in linq
xamarin loop add unique buttons
adding a check flag on collection
clojure get key from map entry
insert multiple records on the Database in Entity Framework Core
CompareListsC#Linq
attempt ot compare number with table fivem
has many through syntax
jpa repository spring boot query table not mapped
dig method in ruby How to check whether a nested hash element exists
iterator on std::tuple
xunit can collectionfixture have parameter
yoast describtion filter
python nested object to dict
seq.POSIXt
save multiple models at once
does stream map change original value
duplicate key value violates unique constraint "pg_type_typname_nsp_index"
add class queryselector
Order by relation's column
ruby find lower number array object
How to implement reverse-lookup in enum?
GetLeftSideOfStringLinq
An element or member function of a nested table or varray was referenced
drupal 8 group hook_entity_access
update numof rows codeigniter
select query ci4
searchQueryBuilder: (query, list) flutter
select the first property in an object
multiselect_2
Using LINQ to update/change values in collection
result set metadata
OneToMany
entity framework core update one to many relationship
perl hash count
how to write query to to display record having maximum value
dapper insert record when entity properties have the same names as the SQL columns.
hibernate getList of a table
DB raw with where condition in laravwel
enums if thymeleaf
Extend the 2.1 case study to implement below listed queries. Write separate operations/method to implement each query. a.Query all books in database.
ajax y thymeleaf fragments
xfce whisker menu super key
ternary operator structure.
find duplicates in ArcGIS Pro 1
Native Query
flashback in bigquery
deserialize xml c
request entity too large limit: 102400 feathers
jsonpickle exclude py/object
accessing varchar array from sql
pgp object merge
tcl check if value exists in list
hashin ON
list vs map
assigning value with for each
entitymanager merge example
Convert a list of dictionary into a feature vector
To ignore duplicate keys during 'copy from' in postgresql
multilevel sub categories
filter based on morphToMany
sequelize log Special methods
search in dict as hashing
salesforce cpq apply immediately
clojure merge maps
:app:checkDebugDuplicateClasses
how to add key value pair in object
how to map request in spring boot controller from a table with foreign key
resultsetmetadata
Laravel eloquent mass assignments
When rendering a list what is a key and what is it's purpose?
difference between offer hashmap and hashtable
get data from Instance of Future<DocumentSnapshot>
mapstruct beam mapping for list of maps
identifying relationship
serenity-is advanced filter
CREATE TABLE researcher ( researcherid serial PRIMARY KEY, name text );
whereIn knex
sequelize validate unique index in two columns
loads the given relationships for all models in the collection if the relationships are not already loaded
The object 'DF__tbl_Conta__Creat__61316BF4' is dependent on column 'Created_On'.
my filter
doctrine remove element from arraycollection
ruby array push if not exists
xfce super key
unordered map replace value
Could not map 'Transaction.id'. Maybe a fragment in 'String' is considered a simple type. Mapper continues with Id
abap move corresponding with mapping table
sequelize 4 sort on joined table attribute
$q=mysqli_query($db,"SELECT * from books where name like'%$_POST[search]%' "); if (mysqli_num_rows($q)==0)
in scala, can a lazy data be a variable
you want to extract multiple hits, try this:
multi query set
Same Taxonomy Add Multiple Post Type
linqkit predicatebuilder or and nested combined predicates
sequelize duplicate foreign key
filter by list of ids in linq
how to remove pagination from model view
sorting-a-dictionary-by-value-then-by-key
When ModelMapper maping an entity to Dto, is there a way to automatically map an entity collection to the Dto internal collection field?
multilevel subcategories in laravel
AnnotationException: @OneToOne or @ManyToOne
find_by model fuelphp
test dynamic table
underscore group by two fields
rsa pair from seed
nested table in markdown
return query builder result to array
iteration thymeleaf
required and duplicate form validation codeigniter
mysql clone table with data and add attribute
list object has no attribute intersection
column 'user_id' in on clause is ambiguous
gem friendly_id with multiple column s
sequelize change item
resultsetmetadata in sql
Laravel eloquent mass assignments with json
jekyll map
special security rules for specific query
hibernate query many to many
non-identifying relationship
boolean for duplicate values in a column
twint.run.Followers(c)
what is an ember pacjquery.slim.min.map
knex raw multiple '?'
not equal in racket
request.query_dict hubspot
laravel collection loadMissing
wtforms.validators.Unique
contains in where clause firestore flutter
LDAP query to test if user is member or group
pymol select alternate conformation
insertbefore
hashmap key check
tbh tunr kl fk pgo atbh guav navu kt yu
throw a observablecollection to a list in datalayer
check if the substring is present in the column of the record rails console
ruby make chain method
queryselect get type of elment class or id
extract the desired group, as follows:
salesforce lightning detect key
linq select if different null
$OuTree = Get-ADOrganizationalUnit -Filter * -SearchBase
how to name form another table in codeigniter
child row: a foreign key constraint fails
key exist or not in map
how to count null values
ModelMapper skip a field codegrepper
enum reject
how to map nested list in automapper
django foriegn key filter sample
one and only one in er diagram
different states on same model odoo
public class CatalogItemTypeDto : BaseDto { public string Name { get; set; }
find subset of two hashes in ruby
ruby hash except nested
active record validates
what is the use of @JoinColumn(name="ID", referencedColumnName = "ID")
making a modelserializer field required
group function vs aggregate function
Laravel eloquent upserts
filter query thingworx
random record get with pagination in karavel 8
ignore foreign key constraint in sequelize
how to validate if field is unique based on id using fluentValidation
{', '.join(["'" + x + "'" for x in schema_fields name])}
How to find the number of rows for all views in a schema?
Argument 1 passed to Doctrine\Inflector\Inflector::singularize() must be of the type string, null given,
make a column unique in ef core
how to search for a nested value using where firestore
typegraphql decorators
data and hash required
Joins and group and size to make query return hash
add type to element of data structure in haskell
racket list equal elements
check if doctrine collection is empty twig
Hibernate, how do we define the primary key value generation logic as auto?
ActiveRecord retrieve the number of instances of a given ActiveRecord model
ruby permutation
tstringlist length delphi
ORACLE multiset union distinct
f.name for f in df.schema.fields if isinstance(f.dataType, StringType)]
newuidmap
how should you use content_for and yield
how to count null values with collections
EF Generic Repository Multiple Includes and OrderBy Call
crud typeorm relationship
how to set boolean filter in kendo table
python list comprehension with filter
how to query and join two table with knex
Ada ternary
Generic constraint on constructor function
Get Key's Value
dynamic soql escape the single quote
hibernate set multiple parameter
what is association in oop
stack overflow in Many To One relationship to
does view contains data
what do we mean by lossless join property in dbms
knex join multiple conditions
allow a funciton to pass through a linkedserver open query in short form
Haskell check for list equality
relational operator in switch case
active record create association
check if div is collpased jqeruy
how to get all data from elasticsearch
how to prevent marvel to have indices elasticsearch
add one to one relationship entity framework
ValueError: dictionary update sequence element #0 has length 1; 2 is required
illumina sequencing is also called as
sqlalchemy.exc.ArgumentError: Mapper mapped class Students_of_teacher->students_of_teacher could not assemble any primary key columns for mapped table 'students_of_teacher'
ActiveRecord retrieve all instances of a given ActiveRecord model
Target class [Database\Seeders\VoyagerDatabaseSeeder] does not exist.
To overcome the need to backtrack in constraint satisfaction problem can be eliminated by
construct's one's self identity
SELECT * FROM `phptrip` WHERE `dest`=`BIHAR` LIMIT 0, 25
GET HASH
eloquent search from child table column
subquery in codeigniter
swift constraints
realtime database push multiple values
sqlserver: reverse like search
a div within table behave like table element
if(ResultSet.next())
add foreign key anottation
codeigniter insert if not exist
python list comprehension with filter example
Updated fields only if addToSet get updated
goroutine
Alembic not finding new models
user wildcard cards in queryselectorall
Dictionary Add Entry
inverse of in rails
sequelize findall in array
using index condition instead of index
thymeleaf if
how to get structure of table in codeigniter
does view contains data in sql
laravel defining relationship
Duplicate entry Exception
how to firebase.database().ref push unique id in same unique id firebase
razor add short if
criteria query order by
initialize hash with 0 value ruby
update the same custom field without duplicates
filter get in spring boot for particular range
uilabel without constraint
what is the conflict called that occurs in onself
dump all variable in view codeigniter
tax query or
add class on column depend on column count
firebase database get child count
If (e.KeyChar < Chr(48) Or e.KeyChar > (57)) And e.KeyChar <> Chr(8) Then e.Handled = True End If
is not declared as a table, projection view, or database view in ABAP Dictionary or does not exist in an active ve
List of Pydantic model. List[BaseModel]
create table check constraint input expect these values
sequelize contains
jqgrid set filter programmatically
EF groupby dictionary
find_by column name rails route
entry criteria
switch order of fields in assertions intellij
number of pagination using preceding sibling
for and if nested together
hibernate repository findby multiple fields
decision tree drools using spring boot
firestore find by part of string
python list comprehension with filter example 2
Select id,name,(select id from contacts)(select id from Opportunity) from Account
clone table structure
joinAlias nHibernate
golang loop over a channel
eloquent where parentheses
salesforce apex loop through fields of an object
where are dictd dictionaries
hibernate vs suspend
Dictionary Check Key Exist
create user oracle hash by value
seed single seeder
@jsondeserialize annotation
laravel has many through relationship
how to make unique column in elixir
sro manyang unique
The value of 'ordering[0]' refers to 'username', which is not an attribute of 'users.User'
ruby on rails db column contains string
unique array by multiple properties
The index contains 1119 leaf fields (fields of a non-complex type)
How to chain ternary operators
tax query and
spring @value list of strings
Querying by id in Firestore
deseq2 design two conditions
get collection select qurey
entity fast insert recordset
sequelize special methods
update map sclaa
how to make _id elasticsearch with custom value
Which of the following method is used to remove all key/value pair map?
pool map iterator
linq contains null
@query with generics
get-itemproperty select-object HKEY_USERS\*\VirtualStore
query params
fastest way to check if item in hash set
core likema
use variable in hibernate query request
multiple class attribute conflict 2
add query in code igniter
setting default values for all fields of a model
--all-or-nothing[=ALL-OR-NOTHING] command doctrine
GFAPI::get_entries
How to create Group of Groups in TestNG?
golang hashmap example
iterate through dict with condition
conditions with isin function
convert array to Illuminate\Http\Request
haskell roatate list
monday.com mutation dropdown list
shoulda matchers foreign key
Dynamically limiting queryset of related field
Laravel model retrieve
cloudinary has_many photos
laravel collection load
multiple categories on distploy
django cms filter vs get
Your filters contain a field ' createdAt.desc' that doesn't appear on your model definition nor it's relations"
@PrimaryKeyColumn(ordinal = 3,type = PrimaryKeyType.CLUSTERED)
Same diffrence
dictionary of all english words for my project
recurrence relation from code
Map put() method
reverse data query apollo not working
django two foreignkeys to same model admin error
php undefined function mysqli_fetch_all()
can i be a model
racket tuple
Proper Case django template
check if post[] exist
sequelize order by nulls last
“no such column” after adding a field to the model
extract specific tuple values from two different keys from nested dictionary
find the property is matching
Related Table formula
querysoftke
belongs to many select columns
sugarcrm translation list key value
Conditional Clause in query, run query if something true with multiple clause
update map scala
dbml name attribute of the Type element is already used by another type.
class Solution(object): def minOperations(self, boxes): """ :type boxes: str :rtype: List[int] """
twig get object keyz
moq return mockset
The row in table 'cal_event' with primary key '1' has an invalid foreign key: cal_event.event_creator_id contains a value '0' that does not have a corresponding value in auth_user.id.
other names of unique key
haproxy and condition example
UserModel._default_manager.filter(**{case_insensitive_username_field:username}).exists()):
firebase query multiple keys
one to many relationship return persistent collection with empty array collection symfony
joining multiple documents in cloud firestore
: s = 'Hi hi hi bye bye bye word count’ sc.parallelize(seq).map(lambda word: (word, 1)).reduceByKey(add).collect()
if dictonnairy contain key
An unhandled exception occurred: Cannot find module 'json-schema-traverse'
Django url with primary key
Query method parameters should either be a type that can be converted into a database column or a List / Array that contains such type. You can consider adding a Type Adapter for this.
List Comprehension build a list of tuples
public CustomAdapter(Context context, ArrayList<HashMap> data, int resource, String[] from, int[] to)
use_key_in_widget_constructors
secondary namenode vs standby namenode
the primary key is selected from the
laravel collection modelKeys
razor hash
search a string from an array in firestore
selenium get by many class
filter vs get django
(models.W042) Auto-created primary key
Deep copy HashMap with mutable objects inside
Data returned from queries is read-only
add where clause in elasticsearch
drupal class resolver
hql with case sum inside constructor
''.join(s)
gdscript dictionary duplicate
How to save a ManyToManyField as a value instead of id
meaning of clause
Storing Data within your TileEntity
maptoint vs map
difference between multiple and multilevel inheritance
iterate through multiple lists django
IDbConnection get count
Map to Object
delcare array of strings in graphql schema
Print each index value in the hash table followed by all the key fields (names) of the entries stored at that index.
querySelector wild card
multiselect_0
larvel create MD5Hasher
examole of a schematic model
Laravel retrieving aggregates
eachfeature leaflet
RowoverDuplicates
the_fiel
Return id or some value after INSERT
get native query result to map column names
you can't specify target table for update in from clause delete
There are multiple records in a table and some are duplicates. Which command will fetch only one copy of the duplicate records? Pick ONE option SELECT DISTINCT SELECT UNIQUE SELECT DIFFERENT All of the above
tuple merging
mapped method odoo
JPA hibernates left join
linq query two conditions
linq deleteonsubmit only deletes relation
what is hibernate sequence table
comparing oblects
laravel collection makeVisible
Repository Query
referencing column and referenced column in foreign key constraint are incompatible.
Nette mapping
update a dictionary using pymongo
factual description on my role model
groupby nested linq
lodash groupby unique array of objects
chain id
ruby flatten only 1 level
delete row entity jpa java
duplicate key value violates unique constraint "django_migrations_pkey"
linq to update a list
make nested dict from two dict
$ in haskell
doctrine findby criteria
create unique set of array of objects
ValueError: If using all scalar values, you must pass an index
difference between primary key and foreign key
objectMapper read list of type
self join
firestore generate unique id
parent soql query
duplicate entry
modele sequelize
constraint
select query into left join doctrine
findOne
Write Number in Expanded Form You will be given a number and you will need to return it as a string in Expanded Form. For example:
lst3 = [value for value in lst1 if value in lst2] meaning
select all from table where column equals to value
@Model.First() is not working
sellect one proporty linq
sp to create unique id if a new value is found
read rdata and assign different name to objects
multivalued solr field distance measure
if sma_20 > sma_50: if context.aapl not in open_orders order_target_percent(context.aapl,1.0)#order_target_percent(card,% of profoil)'''
idl else
How to query two separate metrics from different entities in new relic
matchcollection join all values into one
value does not fall within the expected range. sharepoint join spquery
duplicate data found in servlet
one lin3r
String#count_sentences returns the number of sentences in a complex string Failure/Error: expect(complex_string.count_sentences).to eq(4)
splunk rex field=_raw
handlesubmit is not a function grocery list
select * from key_value where name = "block.deleted";
why is my primary key displaying as none
property.resources
docker list fo dockiers
how to get list of docker containers
latex text size
jake paul
Logan Paul
logan paul net worth
markdown table
adblock
github authentication token
github token
gold color code
gold hex code
tkinter.destroy
Error in file(file, "rt") : cannot open the connection
signaure pad convert to img
chocolatey
`FirebaseAnalytics` requires CocoaPods version `>= 1.10.0`, which is not satisfied by your current version, `1.5.2`.
types of Shadersopen gl
would clobber existing tag
xpath attribute equal to partial match
pillow read from ndarray
microsoft word code formatting
how to change action bar color in android studio
for num in range(10, 14): for i in range(2, num): if num%i == 1: print(num) break
outline text bootstrap
How to Generate SHA -1 fingerprint certificate for firebase android project
what is distortion in data communication
name mgr is not defined
throttling
use certbot to generate certificate
command errored out with exit status 1:
terraform variable types
cryptophane delete secret key
android code optimization techniques
print short in c
header ejs
disney
git for nube
entete
null error due to delay in api response
acceptance angle
paste clipboard nano
DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
log message from jni
github portfolio
how to open chrome browser in selenium
seguridad de los equipos en iso 27002
best email templates
action upton execution of ok button in msg box win forms
nginx The 'Access-Control-Allow-Origin' header contains multiple values ',', but only one is allowed.
ultimate power mode
xlm xpath
react hooks api
what is amd module used for
__dopostback
clip video using ffmpeg
puppet download mac
what causes covid
composer xampp windows
spider man no way home premiere date
htcondor daemons not running
tree command levels
best free games
passing reference to thread c++
address regex
erreur de segmentation (core dumped)
disable prietter for lines
dark green rgb code
div overlaps when using 100vh
increase spacing in pie chart title in high chart
hmmm
The equation of tangent passing through the point the point (2, 0)and slope "-2" is
the default 'list' constructor isn't available when null safety is enabled flutter
chaika
Mdb Buttons
kafka list topics
sourcebin
how to put stuff in the console
simver
Message: 'chromedriver' executable needs to be in PATH.
add war support hoi4
the import org.mockito.junit.mockitojunitrunner cannot be resolved
select random 4 character from string in excel vba
how to change placeholder color of select
mac rename file to file creation date
open chrome in insecure mode
ienumerable vs iqueryable vs ilist vs icollection
plmoknijbuhvygctfxrdzeswaq
AWS SAM Webpack Plugin
No LSB modules are available
edificio felicita são carlos alugar
flutter + const_constructor_param_type_mismatch
how to insert an image in markdown
import multiple images in parcel
minecraft bedrock save location
bootstrap link no underline
javax.net.ssl.trustStore trustall
super smash bros brawl
indolent
magento 2 find orders for a product
how to generate an onclose jest
Your Flutter application is created using an older version of the Android embedding. It's being deprecated in favor of Android embedding v2. Follow the steps at
Ins= contract.x_studio_social_security_basic_salary if ins = 1500 result = contract.x_studio_social_security_basic_salary*100
timber first letter uppercase
glyphicon-user bootstrap 4
delete rows from string vector r according to regex
Draxon
/replaceitem
react-select input color
joinAlias nHibernate
cannot execute binary file
WSL2 request to failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org
précelle dentaire
miata
iso-8601
how to initialize a queue in c
lp show default media
keychain locked macos
(504, b'5.5.2 <webmaster@localhost>: Sender address rejected: need fully-qualified address')} alllauth
preloader not fading out
js js
npm kill 3000
xaml trigger
keep inventory command
asdfghjkl
two condition inside media query
entitymanager merge example
JobQueue telebot
golang validation struct
diamond color code
How to get text of an element in Selenium WebDriver, without including child element text?
how to make controls for a game
pub run twice problem
increase svg thickness
how to get height in pyqt5
heroku variables console
Error: package or namespace load failed for ‘ggplot2’ in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]): namespace ‘rlang’ 0.4.7 is already loaded, but >= 0.4.10 is required
reinstall windows store
do set rendredrd symfony
redefinition of 'const char* ssid'
naza
Glob more than one pattern
what is big data
emacs: delete files from emacs "find recent" list
politik probleme niedersachsen 2021
check public ip txt
$result = $distance[$cityA][$cityB];
pruning meaning
load mongo db on a google cloud VM
select multi columns pandas
emoji pycharm
undefined $ jquery rails
image bad when scaled in pygame
objectmapper write value as stirng human readable
arrow functions javascript
content padding field text flutter
how to remove variable in postman
239783496
rockstar games
how to get the last element of an array
what is the livereload port
multiples of 10
dreamweaver
showDescriptionErrors(){ const descriptionForm = this.postForm.get('description'); if (descriptionForm?.touched && !descriptionForm.valid){ if (descriptionForm.errors.required){ return 'Description is required'; }
why semiconductor shortage
mongo db connection
a=[5,10,15,25] print(a[::-2])
erpnext get password field
how to select multiple lines in vsc
mongoose encryption npm
pdf to jpg terminal
turbo warp help
to print with x decimal places
how to check callback is fired rtl
Galaxy A11
Rolling sum
bootstrap 5 link
Warning: require(/home/../wp-includes/rest-api/endpoints/class-wp-rest-application-passwords-controller.php): failed to open stream:
what are testing tools
deluge form object
crypto
keep my head afloat lyrics
number of spanning tree gfg
get first word of a string before space flutter
remove input x
matlab create cell array of strings
throughput in os
reference to textmeshpro
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors.
z-index inside z-index
disadvantages of selenium
unable to open image permission denied
Failed to determine a suitable driver class mongodb
RGB Fusion 2.0 Utility Download
translate
a href from database
image-set css
how to execute script with atom
get coordinates of a point qgis
SPDX license identifier not provided in source file.
outparse pip
hack sitw
vba create directory if not exist
iphone 12 switzerland availability
2022
honeygain
completely unlocked firestore
python group by
i'm not getting decimal in postgres
how to find a list of columns containing null values
shortcut to rename the file on lenovo s340
hasown
.
|
https://www.codegrepper.com/code-examples/whatever/serialization+in+api
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Signals and Slots¶
Due to the nature of Qt,
QObjects require a way to communicate, and that’s
the reason for this mechanism to be a central feature of Qt.
In simple terms, you can understand Signal and Slots in the same way you interact with the lights in your house. When you move the light switch (signal) you get a result which may be that your light bulbs are switched on/off (slot).
While developing interfaces, you can get a real example by the effect of clicking a button: the ‘click’ will be the signal, and the slot will be what happens when that button is clicked, like closing a window, saving a document, etc.
Note
If you have experience with other frameworks or toolkits, it’s likely that you read a concept called ‘callback’. Leaving the implementation details aside, a callback will be related to a notification function, passing a pointer to a function in case it’s required due to the events that happen in your program. This approach might sound similar, but there are essential differences that make it an unintuitive approach, like ensuring the type correctness of callback arguments, and some others.
All classes that inherit from
QObject or one of its subclasses, like.)
Qt’s widgets have many predefined signals and slots. For example, QAbstractButton (base class of buttons in Qt) has a clicked() signal and QLineEdit (single line input field) has a slot named ‘clear()`. So, a text input field with a button to clear the text could be implemented by placing a QToolButton to the right of the QLineEdit and connecting its clicked() signal to the slot ‘clear()`. This is done using the connect() method of the signal:
button = QToolButton() line_edit = QLineEdit() button.clicked.connect(line_edit.clear)
connect() returns a QMetaObject.Connection object, which can be used with the disconnect() method to sever the connection.
Signals can also be connected to free functions:
import sys from PySide6.QtWidgets import QApplication, QPushButton def function(): print("The 'function' has been called!") app = QApplication() button = QPushButton("Call function") button.clicked.connect(func) button.show() sys.exit(app.exec())
Connections can be spelled out in code or, for widget forms, designed in the Signal-Slot Editor of Qt Designer.
The Signal Class¶
When writing classes in Python, signals are declared as class level
variables of the class
QtCore.Signal(). A QWidget-based button
that emits a clicked() signal could look as
follows:
from PySide6.QtCore import Qt, Signal from PySide6.QtWidgets import QWidget class Button(QWidget): clicked = Signal(Qt.MouseButton) ... def mousePressEvent(self, event): self.clicked.emit(event.button())
The constructor of
Signal takes a tuple or a list of Python types
and C types:
signal1 = Signal(int) # Python types signal2 = Signal(QUrl) # Qt Types signal3 = Signal(int, str, int) # more than one type signal4 = Signal((float,), (QDate,)) # optional types
In addition to that, it can receive also a named argument
name that defines
the signal name. If nothing is passed, the new signal will have the same name
as the variable that it is being assigned to.
# TODO signal5 = Signal(int, name='rangeChanged') # ... rangeChanged.emit(...)
Another useful option of
Signal is the arguments name,
useful for QML applications to refer to the emitted values by name:
sumResult = Signal(int, arguments=['sum'])Connections { target: ... function onSumResult(sum) { // do something with 'sum' }
The Slot Class¶
Slots in QObject-derived classes should be indicated by the decorator
@QtCore.Slot(). Again, to define a signature just pass the types
similar to the
QtCore.Signal() class.
@Slot(str) def slot_function(self, s): ...
Slot() also accepts a
name and a
result keyword.
The
result keyword defines the type that will be returned and can be a C or
Python type. The
name keyword behaves the same way as in
Signal(). If
nothing is passed as name then the new slot will have the same name as the
function that is being decorated.
Overloading Signals and Slots with Different Types¶
It is actually possible to use signals and slots of the same name with different parameter type lists. This is legacy from Qt 5 and not recommended for new code. In Qt 6, signals have distinct names for different types.
The following example uses two handlers for a Signal and a Slot to showcase the different functionality.
import sys from PySide6.QtWidgets import QApplication, QPushButton from PySide6.QtCore import QObject, Signal, Slot class Communicate(QObject): # create two new signals on the fly: one will handle # int type, the other will handle strings speak = Signal((int,), (str,)) def __init__(self, parent=None): super().__init__(self, parent) self.speak[int].connect(self.say_something) self.speak[str].connect(self.say_something) # define a new slot that receives a C 'int' or a 'str' # and has 'say_something' as its name @Slot(int) @Slot(str) def say_something(self, arg): if isinstance(arg, int): print("This is a number:", arg) elif isinstance(arg, str): print("This is a string:", arg) if __name__ == "__main__": app = QApplication(sys.argv) someone = Communicate() # emit 'speak' signal with different arguments. # we have to specify the str as int is the default someone.speak.emit(10) someone.speak[str].emit("Hello everybody!")
Specifying Signals and Slots by Method Signature Strings¶
Signals and slots can also be specified as C++ method signature strings passed through the SIGNAL() and/or SLOT() functions:
from PySide6.QtCore import SIGNAL, SLOT button.connect(SIGNAL("clicked(Qt::MouseButton)"), action_handler, SLOT("action1(Qt::MouseButton)"))
This is not recommended for connecting signals, it is mostly used to specify signals for methods like QWizardPage::registerField():
wizard.registerField("text", line_edit, "text", SIGNAL("textChanged(Q.
|
https://doc.qt.io/qtforpython/tutorials/basictutorial/signals_and_slots.html
|
CC-MAIN-2022-05
|
en
|
refinedweb
|
Testing Android App Upgrades
Previously, we explored how to test app upgrades on iOS, and this week we're going to do the same for Android. What's the deal with app upgrades? The idea is that as testers, our.
Appium comes with some built-in commands to handle this kind of requirement. Fortunately, on Android it's even simpler than on iOS! To demonstrate, let's go back to the old version of The App, specifically v1.0.0. This version contains just one feature: a little text field which saves what you write into it and echoes it back to you. It just so happens that this data is saved internally using the key
@TheApp:savedEcho. (The specifics of how this data is saved have to do with React Native and aren't important---just imagine that your dev team has a way to save local user data that might need to persist between upgrades). (in this case, me). Let's pretend that like a good developer I initially forgot to write the migration code after changing the data storage key. I then release v1.0.1. This version will unfortunately contain the bug I described above (of the missing text) due to the forgotten migration code---even though as a standalone version of the app it works fine. Eventually I realize the mistake, and write the migration code. I can't re-release v1.0.1, so I release the fix as v1.0.2.
At this point, the testers, having been burned by my incompetence as a developer, two methods:
driver.installApp("/path/to/apk");
// 'activity'.
The only wrinkle);
For our example, to create a passing app upgrade test we'll of course need two apps: our original version and the version to upgrade to. In our case, that's v1.0.0 and v1.0.2 of The App:
private String APP_V1_0_0 = "";
private String APP_V1_0_2 = "";
I'm happily using GitHub asset download URLs to feed into Appium here. Assuming we've started the test with
APP_V1_0_0 as our
app capability, the duo of app upgrade commands then looks like:
driver.installApp(appUpgradeVersion);
Activity activity = new Activity(APP_PKG, APP_ACT);
driver.startActivity(activity); for the implementation of this full flow, including boilerplate:
import io.appium.java_client.MobileBy;
import io.appium.java_client.android.Activity;
import io.appium.java_client.android.AndroidDriver;009_Android_Upgrade {
private String APP_PKG = "io.cloudgrey.the_app";
private String APP_ACT = "com.theapp.MainActivity";
private String APP_V1_0_0 = "";
private String APP_V1_0_1 = "";
private String APP_V1_0_2 = "";
private String TEST_MESSAGE = "Hello World";
private By msgInput = MobileBy.AccessibilityId("messageInput");
private By savedMsg = MobileBy.AccessibilityId(TEST_MESSAGE);
private By saveMsgBtn = MobileBy.AccessibilityId("messageSaveBtn");
private By echoBox = MobileBy.AccessibilityId("Echo Box");
public void testSavedTextAfterUpgrade () throws IOException {
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("platformName", "Android");
capabilities.setCapability("deviceName", "Android Emulator");
capabilities.setCapability("automationName", "UiAutomator2");
capabilities.setCapability("app", APP_V1_0_0);
// change this to APP_V1_0_1 to experience a failing scenario
String appUpgradeVersion = APP_V1_0_2;
// Open the app.
AndroidDriver driver = new AndroidDriver.installApp(appUpgradeVersion);
Activity activity = new Activity(APP_PKG, APP_ACT);
driver.startActivity(activity);
wait.until(ExpectedConditions.presenceOfElementLocated(echoBox)).click();
savedText = wait.until(ExpectedConditions.presenceOfElementLocated(savedMsg)).getText();
Assert.assertEquals(savedText, TEST_MESSAGE);
} finally {
driver.quit();
}
}
}
(Note that I included the option of running the test with the version of the app that has the bug, namely v1.0.1, which would produce a failing test.)
As always, you can find the code inside a working repository on GitHub. That's it for testing app upgrades on Android! Remember to check out the previous tip on how to do the same thing with iOS.
|
https://appiumpro.com/editions/9
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
Welcome to the Rockbox Technical Forums!
You should ask the foobar people then.
If you just put the playlist in some folder below the one with the files it should work even without relative paths.
hard to believe there is no tool that will create relative m3us or correct the path later.
Shouldn't be hard to write a converter to change path names in a couple of lines of Python code. Or a similar scripting language.
I've use win's Notepad editor Find & Replace funtion for m3u's.
sure, i should have better written automatically. does there exist an editor where you can define a path (and save it) so just to hit ''find & replace''?
yes and i wish i could write such a code..
import sysimport osinfile = sys.argv[1]basepath = os.path.dirname(os.path.abspath(infile))outlines = []fp = open(infile)for line in fp.readlines(): if line.startswith('#'): # m3u comments start with # outlines.append(line) else: outlines.append(os.path.relpath(line, basepath))fp.close()fp = open(infile, "w")for line in outlines: fp.write(line)fp.close()
Page created in 0.077 seconds with 65 queries.
|
http://forums.rockbox.org/index.php/topic,42856.msg218003.html
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
One, based.
> Anytime I put an assertion into my code, it’s a tacit acknowledgment that I don’t have complete trust that the property being asserted actually holds.
That is not what assert() means to me. When I write assert(), it means that I, the writer, do absolutely believe that the condition is always true and that you, the reader, ought to believe it as well. If I have doubts about the truth of the condition, I’ll write something like:
if( NEVER(condition) ){ … remedial action… }
The NEVER() macro is so defined as to be a pass-through for release builds, but calls abort() for debug builds if the condition is true.
An assert() is an executable comment. The fact that it is executable gives it more weight because it is self-verifying. When I see a comment like “/* param is never NULL */” I am much less likely to believe that comment than when I see “assert( param!=0 )”. My level of trust in the assert() depends on how well-tested is the software, but is almost always greater than my level of trust in the comment.
Used in this way, assert() is a very powerful documentation mechanism that helps to keep complex software maintainable over the long term.
That is not how I view the use of assert(). Basically, if the condition is not true, stop before irreparable damage occurs or the system cannot recover in any meaningful way.
The idea of an ‘executable comment’ doesn’t make sense to me. Any code, not used for the direct purpose of the system, is just another failure point.
Hi Richard, if the assertion is guaranteed to be true, I guess I don’t understand why you would bother making it executable, instead of just leaving a comment? In many cases an English comment would be a lot more readable.
I agree more with Pat: an assertion is not only documentation but also part of a defense-in-depth strategy against things going wrong, whether it is a logic error, some sort of unrepeatable state corruption, a compiler bug, or whatever.
Of course I agree that an assertion is never used for detecting things that can legitimately go wrong in sane executions of the program.
assert is not defense in depth because it is compiled out of release builds which will encounter malicious input. It is a way to write pre- and post-conditions or even contracts that are used to debug software to catch/pinpoint/isolate errors as soon as possible instead of letting bad data affect state somewhere down the line and then not know how we got the incorrect/invalid state in the first place.
> Any code, not used for the direct purpose of the system, is just another failure point.
Agreed and for this reason I use the (non-default) behavior of disabling assert() for release builds. (Mostly. In applications where assert()s are less frequent, where the application is less well-tested, and where the assert()s do not impose a performance penalty I will sometimes be lazy and leave them in releases.) You obviously cannot achieve 100% MC/DC if you have assert() enabled in your code.
An assert() is a statement of an invariant. The more invariants you know about a block of code or subroutine, the better you are able to reason about that block or subroutine. In a large and complex system, it is impractical to keep the state of the entire system in mind at all times. You are much more productive to break the system down into manageable pieces – pieces small enough to construct informal proofs of correctness Assert() helps with this by constraining the internal interfaces.
Assert() is also useful for constraining internal interfaces such that future changes do not cause subtle and difficult-to-detect errors and/or vulnerabilities.
You can also state invariants in comments, and for clarity you probably should. But I typically do not trust comments that are not backed up by assert()s as the comments can be and often are incorrect. If I see an assert() then I know both the programmer’s intent and also that the intent was fulfilled (assuming the code is well-tested).
I was starting to write out some examples to illustrate my use of assert(), which I previously thought to be the commonly-held view. But it seems like the comment section for EIA is not the right forum for such details. I think I need to write up a separate post on my own site. I have a note to do that, and will add a follow-up here if and when I achieve that goal.
That some of the best practitioners in the field do not view assert() as I do is rather alarming to me. I have previously given no mind to golang since, while initially reviewing the documentation, I came across the statement that they explicitly disallow assert() in that language. “What kind of crazy, misguided nonsense is this…” I thought, and read no further, dismissing golang as unsuitable for serious development work. But if you look at assert() as a safety-rope and not a statement of eternal truth, then I can kind of see the golang developers’ point of view.
So I have the action to try to write up my view of assert() (which I unabashedly claim to be the “correct” view :-)) complete with lots of examples, and provide a link as a follow-up.
Surely we are all in agreement that assert() should never be used to validate an external input.
> Anytime I put an assertion into my code, it’s a tacit acknowledgment that I don’t have complete trust that the property being asserted actually holds.
I wouldn’t phrase it this way, because it looks like an invitation to write less assert() in an attempt to fake the certainty of a condition.
In my view, a condition is what’s required for next piece of code to work properly. If it is expected to be always true, that’s exactly the right moment to write an assert(). Otherwise, this is the domain of error trapping and management.
The assert is one of the best “self-documented piece of code” I can think of. It can, and should, reduce the need for some code comments.
A real code comment can come on top of that, to give context, on why this condition is supposed to be true, and why it is required, when that’s useful.
But in many case, it’s not even necessary.
assert(ptr!=NULL); is often clear enough.
assert() become vital as software size and age grows, with new generation of programmers giving their share to the code base.
An assert might be considered a “superfluous statement of truth” for a single programmer working on its own small code base. After all, it might be completely obvious that this ptr is necessarily !=NULL.
But in a different space and time, another programmer (which might be the same one, just a few years older) will come and modify _another_ part of the code, breaking this condition inadvertently. Without an assert, this can degenerate into a monstrous debug scenario, as the condition might trigger non-trivial side effects, sometimes invisible long after being triggered.
The assert() will catch this change of condition much sooner, and help solve the situation much quicker, helping velocity.
Which means, I feel on Richard’s side on assert() usage. I would usually just keep it for me and not add to the noise, but somehow, I felt the urge to state it. I guess I believe it’s important.
I agree with the defense-in-depth view of asserts. For this reason they are not compiled out in my release builds and I have a test to ensure that this stays true. Compiling them out would mean test/debug and release builds having different semantics, which I don’t find acceptable. You can imagine what I think about the semantics of Python’s assert!
From an industry perspective on “how other CS instructors are dealing with these issues” (of understanding when and how to deploy particular defensive measures), internal training courses I co-developed try to situate detailed coding advice in a broader security engineering context. It includes threat modeling as a general practice, with a specific set of advice for our lines of business. We also frame security efforts as part of normal engineering work, so subject to the same kinds of tradeoffs practitioners make all the time, and approachable using the same families of tools (automated testing, code review, automated deployment and rollback, usability testing, defect tracking, etc.) that are already established for regular work. Of course there are things which are more specialized, and the adversarial context is important for the way we think about security problems, but this is setting the tone.
In structuring the material in this way, we hope to make it more generally valuable than a more rigid list of recommendations. The additional engineering context is meant to make the advice more applicable to novel situations, and to prompt people to think about security engineering as something where they have a voice – not just receiving directives.
Specifically to defensive measures, the material includes prompts for people to think more like an attacker. This starts with the threat modeling, where people are asked to go through the systems they work with at an architectural level, figure out avenues of attack, assess the consequences of attacks, and consider various countermeasures. These sessions often bring out findings relating to trust boundaries in the way you describe. At a more advanced or specialized level we have course offerings which involve hands-on penetration testing, which is a really useful way to foster an understanding of the kinds of attacks which are possible, as well as exploring non-obvious parts of the attack surface.
Therefore, at the code level, some of the discussion above about assertions ends up being conditioned (haha) on how the code in question is being developed and run, and what it does. There are certainly specific practices and gotchas around assert() in particular which are good to know. But people should also be able to negotiate through the context of what the assert is trying to prevent; how likely it is; how bad it would be if it happened; what combination of language features, unit tests, integration tests, smoke tests, etc., are appropriate to use; what we expect to happen in the abort case (is something logged? is there an alert? who gets the alert and what are they meant to do? what does the normal user see while this is happening?); and so on.
Richard, I look forward to your writeup! I do indeed hope that assertions represent eternal truths, but I also have seen that hope dashed too many times by factors beyond my (previous) understanding.
Yann, I suspect we largely agree (as I think Richard and I mostly do). The distinctions here are subtle ones.
Alex, thank you for returning us to the topic that I had hoped to discuss :). The internal courses you describe sound really interesting, I would love to learn more. The teaching scenario you describe, where we strongly encourage people to think both like an attacker and a defender, is a really nice way of putting it. That is what I also try to do.
I’m teaching a Security (which, this being me, means some Anderson readings + lotsa static and dynamic analysis for “good” and “evil”) class for the first time. I think an important distinction to get across is whether mistrust is due to:
– 1. error only
or
– 2. an adversary
In the first case, probability may help you out; in the second, it does not matter, an adversary drives P(trust violated) to 1. Mistaking the relatively rare case 1 for case 2 is a concept to drive home.
I learned about “assert” as a beginning programmer from Kernighan and Plauger’s books “The Elements of Programming Style” and “Software Tools” (not sure which, or both) back in the day. Obviously nobody is infallible including them, but they advocated asserts as “defense in depth” and leaving them enabled in release builds, using the comparison that disabling them in release builds was like wearing a parachute on the ground but taking it off once your plane was in the air. They said their asserts tripped many times during development and kept them from going too far wrong.
That said, I don’t understand how C has survived into the present day, when we could have so much more sanity and static safety. I haven’t tried Rust yet but I’ve played with Ada a little, and it seems preferable to C in a lot of ways.
> If you fail to recognize and properly fortify an
> important trust boundary, it is very likely that
> someone else will recognize it and then exploit it.
I think I disagree with “very,” and my uncertainty is part of the problem. In my extremely limited pre-Snowden experience using static analysis for security concerns, a glaring gap was the one between a bug and an externally-exploitable vulnerability. We didn’t have a good way to rank the bugs, and Snowden’s leaks suggest that we needed to worry about the many not-very-likely ones as well as the few very-likely exploitables. (I’m taking it for granted that “Fix All The Bugs” is a slogan rather than a plan.)
Perhaps one of the most important trust boundaries is between code and data.
We used to think that commingling code and data was good. Early computers (I’m thinking of the IBM 701) had no index instructions and so array operations had to be accomplished using self-altering code, and everybody thought that was super-cool because it reduced the number of vacuum tubes. In the 70s and 80s everybody was raging about how great Lisp was, because it made no distinction between code and data. Javascript has eval() because less than 20 years ago everybody thought that was a great idea, though now we know better and hence disable eval() using CSP.
I spend a lot of time doing SQL. An SQL statement is code – it is a miniature program that gets “compiled” by the query planner and then run to generate the answer. But many SQL statements are constructed at run-time using application data. This commingling of code and data often results in SQL injection attacks, which are still one of the leading exploits on modern systems.
The mixing of code and data is a exceedingly powerful idea. It forms the essential core of Godel’s incompleteness theorem, to name but one prominent example. It is a seductive and elegant idea that can easily lure in unwary. So perhaps the trust boundary between code and data should be given special emphasis when talking about security?
Thanks for the comments, folks, this is all good material!
Efficiency and safety could both be improved by having a means of telling the compiler that execution must be trapped and aborted if a condition doesn’t hold when this point is reached, but execution may at the compiler’s convenience be aborted at almost any time the compiler can determine that it will inevitably reach some kind of an abort-and-trap. Granting such freedom to a compiler could greatly reduce the run-time cost associated with such assertions by not only allowing them to be hoisted out of loops, but also by allowing a compiler that has generated code to trap if an assertion is violated to then exploit the fact that the assertion holds within succeeding code.
For example, given:
for (int i=0; i<size; i++)
{
action_which_cannot_normally_exit();
__EAGER_ASSERT(i = b && (a-b) < 1000000);
return (a-b)*123 + c;
}
a compiler could at its leisure either perform all of the computations using 64-bit values or trap if the parameters are out of range and then perform the multiply using 32-bit values.
Precise trapping for error conditions is expensive, but allowing traps to be imprecise can make them much cheaper. A language which automatically traps dangerous conditions imprecisely may be able to generate more efficient code than one which requires that all dangerous conditions be checked manually.
|
https://blog.regehr.org/archives/1576
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
Code Mistakes: Python's for Loop
Code Mistakes: Python's for Loop
It's really exciting to learn a new language... up until you write something broken that you can't figure out how to fix. Here's something I overlooked recently in trying to make a Python for loop.
Join the DZone community and get the full member experience.Join For Free
Update: I've gotten a lot of great responses to this post that show better, more functional, and more pythonic ways to solve the problem I came across. I'm adding some of these responses to this post, as I know the first solution I came up with was not ideal, but rather a means of fixing an issue in my code I was having trouble understanding. Thanks everyone very much for all the responses.
I'm learning Python right now after years of using Java. It has actually helped a lot that I have been doing some work with Lisp and functional programming, as I think that has helped me bridge the gap between Python and Java. Still, there are some Java practices that I cling onto that cause me some trouble in a new language. Python's
for loop was one of those trouble-makers for me recently.
I actually really like the way Python handles
for loops, but years of Java conditioning had me a bit mixed up. I've gotten so used to defining an iterator, creating the end condition based on the iterator, and then defining how to iterate. For many applications in Python, there's no need to have all that detail. But it was the simplicity of the
for loop that ended up causing me problems.
Essentially, I was trying to take a long list of numbers and iterate over all possible equal-length consecutive segments of that list. In a very basic example, this would look like taking a list:
num_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6, 7, 8, 9]
and finding every set of three consecutive numbers in that list:
123,
234,
456, and so on. So I wrote my Python to look somewhat like this:
for x in num_list: some_var = some_function(num_list[x:x+3]) if some_var > some_other_var: some_other_var = some_var final_list = num_list[x:x+3]
If you've used Python
for loops before, you can probably pretty easily tell what I'm doing wrong here. But if you haven't worked much with these loops, the issue might be a little harder to spot.
The issue here is that Python already knows how to iterate over every item in this list, because it knows what each item is. So it doesn't need me to tell it how to go through the list and get each item, it just does it for me.
This means that
x in
for x in num_list does not refer to the index of each iteration; it refers to the value at each index. But the function I'm performing, and the
final_list variable I'm defining, are trying to use these values as indexes. In the particular list that I gave as an example above, that means, during the first iteration of the loop, while I'm trying to work with
num_list[0], I'm actually working with
num_list[1], since
1 is at index
0 in that list.
In Java, I would have likely defined my iterator and used the iterator to retrieve the values within the loop. But this
for loop in Python is already giving me the value.
Since I still needed an iterator in order to define my index ranges, I ended up adding one within the loop logic:
i = 0 for x in num_list: some_var = some_function(num_list[i:i+3]) if some_var > some_other_var: some_other_var = some_var final_list = num_list[i:i+3] i += 1
This ends up getting me the indexes I need to find consecutive series within the list.
This
very well may not be is not the best way to handle this, and I'd love to hear other solutions that can help make my Python better. But I wanted to point out this difference in case others have had this kind of trouble when learning Python loops.
[Update]
Here are some better solutions presented by great DZone contributors. See the comments for more great discussion, and feel free to leave a comment yourself.
Marcin Cuprjak: There is a syntax in Python for that: enumerate. It gives index and enumerated object...:
for i, x in enumerate(num_list):
Tim Desjardins: A more pythonic way would be to do your for loop as:
for i in xrange(0, len(num_list)):
Less code, more concise.
John Henson: I too work with Java since your foundation (1996), but python is awesome in much areas that have tools and constructions that facilitate your life. If you need iterate for combinations in elements in a contiguous list you can use that in Python 2.7.1 or major:
from itertools import combinations for x in combinations(range(1,100),3): print x
If you have one list L you can do:
for x in combinations(L,3): print x
Andre Burgaud: Assuming you'd be inclined to approach the problem in a more functional fashion, the following might be a starting point:
num_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6, 7, 8, 9] l = [(x, y, z) for (x, y, z) in zip(num_list, num_list[1:], num_list[2:]) if (x, y, z) == (x, x+1, x+2)] print(l)
final_triple = max((triple for triple in zip(num_list, num_list[1:], num_list[2:])), key=some_function)
Alternatively:
final_list = max((num_list[i:i+3] for i in xrange(0, len(num_list) - 2)), key=some_function)
Related Refcard:
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }}
|
https://dzone.com/articles/code-mistakes-pythons-for-loop?fromrel=true
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
.
import { execute, subscribe } from 'graphql'; import { SubscriptionServer } from 'subscriptions-transport-ws'; import { schema } from './schema'; const validateToken = (authToken) => { // ... validate token and return a Promise, rejects in case of an error } const findUser = (authToken) => { return (tokenValidationResult) => { // ... finds user by auth token and return a Promise, rejects in case of an error } } const subscriptionsServer = new SubscriptionServer( { execute, subscribe, schema, onConnect: (connectionParams, webSocket) => { if (connectionParams.authToken) { return validateToken(connectionParams.authToken) .then(findUser(connectionParams.authToken)) .then((user) => { return { currentUser: user, }; }); } throw new Error('Missing auth token!'); } }, { server: websocketServer } );
The example above validates the user's token that is sent with the first initialization message on the transport, then it.
|
https://www.apollographql.com/docs/graphql-subscriptions/authentication/
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
Fun and Games with FreeBSD Kernel Modules - Kernel hacking using kernel modules and kmem patching. Contains information on how to intercept system calls and other calls in the kernel by altering the corresponding call table. Also shows how to alter these tables by writing to kernel memory and gives an example of patching the kernel directly without the use of modules. Furthermore an example is given on how the symbol table in the kernel can be altered.
1c02af353600d213d821553a35d81211
<html>
<head>
<title>Fun and Games with FreeBSD Kernel Modules</title>
</head>
<body bgcolor="#ffffff">
<h2>Fun and Games with FreeBSD Kernel Modules</h2>
v0.1b by Stephanie Wehner, 04/08/2001<br>
<br>
<p>
<a href="#Intro">1. Introduction</a><br>
<a href="#Intro-Kernel">1.1 Kernel Modules</a><br>
<a href="#Intro-Useful">1.2 Useful Functions</a><br>
<p>
<a href="#Methods">2. Techniques</a><br>
<a href="#Methods-Function">2.1 Replacing Function Pointers</a><br>
<a href="#Methods-Function-Sys">2.1.1. System Calls</a><br>
<a href="#Methods-Function-Other">2.1.2. Other Tables</a><br>
<a href="#Methods-Function-Single">2.1.3. Single Function Pointers</a><br>
<a href="#Methods-Lists">2.2. Modifying Kernel Lists</a><br>
<a href="#Methods-Kernel">2.3. Reading and Writing Kernel Memory</a><br>
<a href="#Methods-Kernel-Find">2.3.1. Finding the address of a symbol</a><br>
<a href="#Methods-Kernel-Read">2.3.2. Reading data</a><br>
<a href="#Methods-Kernel-Modify">2.3.3. Modifying kernel code</a><br>
<p>
<a href="#Common">3. Common Applications</a><br>
<a href="#Common-Files">3.1. Hiding & Redirecting Files</a><br>
<a href="#Common-Process">3.2. Hiding Processes</a><br>
<a href="#Common-Network">3.3. Hiding Network Connections</a><br>
<a href="#Common-Firewall">3.4. Hiding Firewall Rules</a><br>
<a href="#Common-Trigger">3.5. Network Triggers</a><br>
<a href="#Common-Module">3.6. Hiding the module</a><br>
<a href="#Common-Other">3.7. Other applications</a><br>
<p>
<a href="#Patching">4. Patching the kernel</a><br>
<a href="#Patching-Example">4.1. Introduction</a><br>
<a href="#Patching-Jumps">4.2. Inserting Jumps</a><br>
<a href="#Patching-Replace">4.3. Replacing Kernel Code</a><br>
<p>
<a href="#Reboot">5. Reboot Proofing</a><br>
<p>
<a href="#Experimental">6. Experimental</a><br>
<p>
<a href="#Defense">7. Defending yourself: The cat and mouse game</a><br>
<a href="#Defense-Symbol">7.1. Checking the symbol table</a><br>
<a href="#Defense-Trap">7.2. Building a Trap Module</a><br>
<a href="#Defense-Direct">7.3. Retrieving data directly</a><br>
<a href="#Defense-Remarks">7.4. Remarks</a><br>
<p>
<a href="#Conclusion">8. Conclusion</a><br>
<p>
<a href="#Code">9. Code</a><br>
<p>
<a href="#References">10. References</a><br>
<p>
<a href="#Thanks">11. Thanks</a><br>
<h3><a name="Intro"></a>1. Introduction</h3>
Kernel modules for FreeBSD have been around for quite some time, yet many people still consider them to be
rather obscure. This article will explore some ways to use kernel modules and kernel memory in order
to alter the behaviour of the system.
<p>.
<p>.
<p>
This text has been written for educational purposes only. Use with care :)
All the example code is available as a single package called Curious Yellow (CY) at the
end of this article.
<h4><a name="Intro-Kernel"></a>1.1 Kernel Modules</h4>
In short, kernel modules allow you to load new code into the kernel. Many of the examples
below use them to add new functions.
<p>
This text assumes you know the basics of how to write a FreeBSD kernel module. If you've never
worked with them before you might want to consult the <a href="">
Dynamic Kernel Linker (KLD) Facility Programming Tutorial</a> published in daemonnews or take a look
at the examples provided in /usr/share/examples/kld/ on your FreeBSD machine.
<h4><a name="Intro-Useful"></a>1.2 Useful Functions</h4>
If you've never done any kernel programming before, here are some functions that can come in very handy
and that are especially useful when dealing with system calls.
<p>
<h5>copyin/copyout/copyinstr/copyoutstr</h5>
<p>
These functions allow you to copy contiguous chunks of data from user space to kernel space and vice
versa. More detailed information can be found in their manpage (copy(9)) and also in the KLD tutorial
mentioned above.
<p>
Say you made a system call which also takes a pointer to a character string as an argument. You now
want to copy the user supplied data to kernel space:
<pre>
struct example_call_args {
char *buffer;
};
int
example_call(struct proc *p, struct example_call_args *uap)
{
int error;
char kernel_buffer_copy[BUFSIZE];
/* copy in the user data */
error = copyin(uap->buffer, &kernel_buffer_copy, BUFSIZE);
[...]
}
</pre>
<h5>fetch/store</h5>
If you just want to transfer small amounts of data, you might want to use the functions
described in fetch(9) and store(9). These functions allow you to transfer byte and word
sized pieces of memory.
<h5>spl..</h5>
The functions described in spl(9) allow you to manipulate interrupt priorities. This
allows you to prevent certain interrupt handlers from being run. In some later examples
the pointer to a function such as icmp_input for example is altered. If this takes multiple
steps you might want to block an interrupt while you're making changes.
<h5>vm_map_find</h5>
<h3><a name="Methods"></a>2. Techniques</h3>
This section lists some common methods that are later used to implement some of the
tricks like process hiding, connection hiding and more. Of course you could use
the same methods to do other things as well.
<h4><a name="Methods-Function"></a>2.1. Replacing Function Pointers</h4>
Probably the most commonly used technique is to replace function pointers
in the kernel to point to the newly loaded functions. In order to make use
of this method, a kernel module is loaded that carries your new function.
You can then swap your function for the original one when the module is loaded.
Alternatively you can later do the swap by writing to /dev/kmem (see below)
<p>
Since you're replacing an existing function, it is important that your newly
created function will take the same arguments as the original one. :) You can
then either do some pre or post processing while still calling the original
function or you can write a complete replacement.
<p>
There are a lot of hooks in the kernel where you can employ this method. Let's
just look at some of the commonly used places.
<h5><a name="Methods-Function-Sys"></a>2.1.1. System Calls</h5>
The classic case is to replace system calls. FreeBSD keeps a list of syscalls
in a struct called sysent. Take a look at /sys/kern/init_sysent.c, here's
a very short excerpt:
<pre> */
[...]
</pre>
If you want to know what struct sysent looks like and what the numbers of the
system calls are, check out /sys/sys/sysent.h and /sys/sys/syscall.h respectively.
<p>
Say you would want to replace the open syscall. In the MOD_LOAD section of your
load module function, you would then do something like:
<pre>
sysent[SYS_open] = (sy_call_t *)your_new_open;
</pre>
If you would want to restore the original call when the module is unloaded, you can
just set it back:
<pre>
sysent[SYS_open].sy_call = (sy_call_t *)open;
</pre>
A complete example will follow below.
<h5><a name="Methods-Function-Other"></a>2.1.2. Other Tables</h4>
The system call table is not the only place however you can mess with. There's a lot of
other interesting places in the FreeBSD kernel. Most notably are inetsw and the vnode tables
of the different filesystems.
<p>:
<pre>
inetsw[ip_protox[IPPROTO_ICMP]].pr_input = new_icmp_input;
</pre>
Again, a complete example will follow later.
<p>
For every filesystem a vnode table is kept that specifies what function is called for which VOP.
Say you wanted to replace ufs_lookup:
<pre>
ufs_vnodeop_p[VOFFSET(vop_lookup)] = (vop_t *) new_ufs_lookup;
</pre>
<p>
There's more places like this where you can hook in, depending on what you want to achieve.
The only documentation however is the kernel source itself.
<h4><a name="Methods-Function-Single"></a>2.1.3. Single Function Pointers</h4>
Occasionally there are also single pointers that are used to determine which function to call.
This is for example the case with ip_fw_ctl_ptr, which points to the function ipfw control
requests will go to. Again, this gives you another place to hook in.
<h4><a name="Methods-Modify"></a>2.2. Modifying Kernel Lists</h4>
Just replacing functions is by itself not a lot of fun. You might also want to alter
the data as it is known by the kernel. A lot of interesting things are stored as lists
inside the kernel. If you've never worked with the list macros as defined in /sys/sys/queue.h
before, you might want to familiarize yourself with them before continuing in this direction.
It will make it easier to understand the existing definitions you will encounter in kernel
code and it will also prevent you from making mistakes if you use these macros yourself.
<p>
Some of the more interesting lists are:
.
<p>
The linker_files list: this list contains the files linked to the kernel. Every link file
can contain one or more modules. This has been described in the <a href="">THC article</a>, so I won't go into that here. This list will become important
when we want to alter the address of a symbol or hide the loaded file with modules.
<p>
The module list: modulelist_t modules contains a list of the loaded modules. Note that this is
different from the files linked. A file can contain more then one module. This will also become
important when you want to hide your module.
<p>
Of course there's a lot more to be found in the kernel sources.
<h4><a name="Methods-Kernel"></a>2.3. Reading and Writing Kernel Memory</h4>
Modules are not the only way to get to things inside the kernel. You can also modify kernel memory
directly via /dev/kmem.
<h5><a name="Methods-Kernel-Find"></a>2.3.1. Finding the address of a symbol</h5>
When you deal with kernel memory you might first be interested in finding the correct
place of a symbol you might want to read or write to. In FreeBSD the kvm(3) functions
provide you with some useful tools to do this. Please consult the manpage on how to use
them. Below is a small example that will find the address given a symbol name. You can
also find this example in the CY package in tools/findsym.c.
<pre>
[...]);
}
[...]
</pre>
<h5><a name="Methods-Kernel-Read"></a>2.3.2. Reading data</h5>
Now that you have found the correct address, you might want to read data from it. You can do
so with kvm_read. The files tools/kvmread.c and tools/listprocs.c will provide you with an
example.
:
<pre>
[...]);
}
</pre>
<h5><a name="Methods-Kernel-Modify"></a>2.3.3. Modifying kernel code</h5>
In a similar fashion you can also write to kernel memory. The manpage on kvm_write will give
you more information. This basically works almost like reading. Later on in this text some
examples will be given. If you're impatient, you can also take a look at tools/putjump.c now.
<h3><a name="Common"></a>3. Common Applications</h3>
<h4><a name="Common-Files"></a>3.1. Hiding & Redirecting Files</h4>
One of the most common things to do is to hide files. This is one of the easier things to do,
so let's start with this.
<p>
There's multiple levels you can hook in your code in order to hide your files. One way is
via catching system calls such as open, stat, etc. Another way is to hook in at the lookup
functions of the underlying filesystem.
<h5>3.1.1. Via System Calls</h5>
This is the way it is usually done, and has been used by various tools and is also described
in the <a href="">THC article</a>.
For the calls that are directed at one specific file, such as open, stat,
chmod etc, this is very simple to do. You can add a new function, say new_open. In this
function you check the supplied filename. If this filename has certain characteristics,
eg it starts with a certain string, it will be hidden right away by returning a not found
error. Otherwise the original open function will be called. Example from module/file-sysc.c:
<pre>
/*
*));
}
</pre>
In the function file_hidden you can then check if the filename should be hidden. In the loader
of your module you then replace the call to open with new_open in the syscall table:
<pre>);
}
</pre> <a href="">THC article</a> and some other places, so I won't go into detail here.
<h5>3.1.2. Via the vnode tables</h5>
The other way of hiding files is via the functions of the underlying filesystem. This approach
has the advantage that you don't need to alter the syscall table and you can save yourself
some work as the lookup function will in the end be called from a lot of syscalls you would all
have to replace otherwise. Also, you might want to consider using some other atrribute of
the file then it's name to determine whether it should be hidden.
<p>.
<p>
Say you wanted all lookups on a UFS filesystem to go to your own function. First of
all you would make a new ufs_lookup. From module/file-ufs.c:
<pre>
/*
*));
}
</pre>
Then you would have to adjust the pointer in the vnode table when the module is loaded:
<pre>);
}
</pre>.
<h5>3.1.3. General Remarks</h5>
File redirection can be implemented in exactly the same way, just replace the file that's
requested with the one you want to give instead. If you want to do execution redirection
you have to change execve. This is quite easy to do. You have to catch execve and alter the
filename to execute that the user passed to it. Note that you might need to allocate more memory
in user space. This can be done with vm_map_find. CY contains an example on how to execute another
program from the kernel where this is used. You could easily adapt this to replace execve.
<h4><a name="Common-Processes"></a>3.2. Hiding Processes</h4>
<h5>3.2.1. Hiding</h5>
Another common thing to do is to hide processes. In order to achieve this you need to
intercept the various ways information about processes is obtained. Also you want to keep
track of which processes you want to hide. Every process is recorded in a struct proc. Check
out /sys/sys/proc.h for the complete structure. On of the fields is called p_flag, which allows
certain flags to be set for each process. One can therefore introduce a new flag:
<pre>
#define P_HIDDEN 0x8000000
</pre>
When a process is hidden, we'll set this flag so it can be recognized later. See module/control.c
for the CY control functions that hide and unhide a process.
<p>
If you do a ps, it will go to kvm_getprocs, which in return will make a sysctl with the following arguments:
<pre>
name[0] = CTL_KERN
name[1] = KERN_PROC
name[2] = KERN_PROC_PID, KERN_PROC_ARGS etc
name[3] can contain the pid in case information about only one process is requested.
</pre>
name is an array that contains the mib, describing what kind of information is requested:
eg what kind of sysctl operation it is and what is requested exactly.
In short the following sub query types are possible: (from /sys/sys/sysctl.h)
<pre>
/*
* */
</pre>
This will ultimately end up at __sysctl. The THC article also contains some information on this, but
since I had implemented it differently, I included some example code in module/process.c. This is
also the place where we'll hide network connections later.
<p> :)
<p>
However, similar to UFS one can also just fix the corresponding entries for procfs. Example code for this
is given in module/process.c. A new procfs lookup for example could look like this:
<pre>
/*
*));
}
</pre>
You would then replace it when you load the module:
<pre>);
}
</pre>
<h5>3.2.2. Hiding children and catching signals</h5>
You would probably want to make sure that descendants of a hidden process will also stay hidden.
Likewise you would want to prevent your hidden processes from being killed. For this you can
intercept the calls to fork and kill. These are system calls and can be replaced using the methods
described above, so I won't provide any code here. You can find examples in module/process.c.
<h3><a name="Network"></a>3.3. Hiding Network Connections</h3>
Hiding network connections from queries such as netstat -an, is very similar to hiding processes.
If this information is retrieved, it will also execute a sysctl, yet alone the mib will be
different. In case of TCP connections:
<pre>
name[0] = CTL_NET
name[1] = PF_INET
name[2] = IPPROTO_TCP
name[3] = TCPCTL_PCBLIST
</pre>
In exactly the same way as before, the data to be hidden will be cut out of the data retrieved
by userland_sysctl. CY allows you to specify various connections that should be hidden via cyctl.
See module/process.c for the __sysctl modifications.
<h3><a name="Common-Firewall"></a>3.4. Hiding Firewall Rules</h3>
Another fun thing to do is to hide firewall rules. This can be done quite easily by substituting
your own function for ip_fw_ctl. ip_fw_ctl is the ipfw control function that all requests like
adding, deleting, listing etc firewall rules will be passed to. We can therefore intercept these
requests in our own function and act accordingly.
<p>.
<pre>
#define IP_FW_F_HIDDEN 0x80000000
</pre>
<p>.
<p>.
<h3><a name="Common-Trigger"></a>3.5. Network Triggers</h3>
As described above there is more places in the kernel where you can swap the pointer
to an existing function for your own. One of these places is inetsw, which contains a
list of inet protocols and information about them, eg like the functions to call when a packet
of this protocol type comes in or goes out.
<p>.
<p>
Part of module/icmp.c:
<pre>
[...];
}
[...]
</pre>
Then, in order to replace icmp_input when the module is loaded:
<pre>);
}
</pre>.
<h3><a name="Common-Module"></a>3.6. Hiding the Module</h3>
Hiding all these things would still be rather obvious if the module itself would
be visible. Therefore one would want to hide its existence as well.
<p>:
<pre>);
</pre>
The next thing one needs to do, is remove the modules from the module list.
In the case of CY this is only one. Similar to the linker_file the module
list also keeps a counter, nextid, one should decrement.
<pre>
extern modulelist_t modules;
extern int nextid;
[...]
module_t mod = 0;
TAILQ_FOREACH(mod, &modules, link) {
if(!strcmp(mod->name, "cy")) {
/*first let's patch the internal ID counter*/
nextid--;
TAILQ_REMOVE(&modules, mod, link);
}
}
[...]
</pre>.
<h3><a name="Common-Other"></a>3.6. Other Applications</h3>
Of course there's lots of other things you can do with kernel modules. Some
example include tty hijacking, hiding the promiscuous mode of an interface
and su'ing via a special system call that will set the process' user id to 0.
The kernel patch described below is a bit similar to that, except that it
changes suser so for many applications this becomes unnecessary. Hiding
the promiscuous mode of an interface could also be done by clearing this
flag for the interface in question by writing to /dev/kmem. However,
in this case it will be cleared even if someone else is running tcpdump
etc on the same interface.
<p>
It should also be possible to filter reads and writes. This would
become particularly interesting in the case of /dev/kmem through which
a lot of information can be obtained.
<h3><a name="Patching"></a>4. Patching the Kernel</h3>
Kernel modules are not the only way to alter the workings of the kernel. You can also
overwrite existing code and data via /dev/kmem. This opens various possibilities.
<p>
In the techniques section I've already outlined some of the basic ways on how to work
with /dev/kmem. This section concentrates on writing to /dev/kmem.
<h4><a name="Patching-Example"></a>4.1. Introduction</h4>
A simple thing you can do in order to test writing is to insert a return at the beginning
of an existing kernel function. One way to test this without disrupting any of your normal
work is to load a kernel module, say CY not in stealth mode, and then write a return at
the beginning of for example cy_ctl and use the cyctl to send command to CY. Nothing
will happen, as cy_ctl will return right away. Check out tools/putreturn.c for the code
on how to do this.
<h4><a name="Patching-Jumps"></a>4.2. Inserting Jumps</h4>
A very similar thing is to write a jump to a certain function. This for example allows
you to redirect existing calls to your own without altering the syscall table or any
other tables involved.
.
<p>
In the tools section, there's a file called tools/putjump.c:
<pre>
/*);
}
</pre>
Data can be written to other places accordingly.
<h4><a name="Patching-Replace"></a>4.3. Replacing Kernel Code</h4>
Although you could avoid altering existing tables using the jump method, you still
had to load your own code. Sometimes it might be nicer to just patch your code
into the existing function. This is not always easy though and makes your patch
highly version and compiler dependent. Nevertheless it's kind of fun :)
<p>:
<pre>
# objdump -d /kernel --start-address=0xc019d538 | more
/kernel: file format elf32-i386
Disassembly of section .text:
c019d538 <suser_xxx>: <suser_xxx+0x2d>
c019d545: 85 d2 test %edx,%edx
c019d547: 75 13 jne c019d55c <suser_xxx+0x24>
c019d549: 68 90 df 36 c0 push $0xc036df90
c019d54e: e8 5d db 00 00 call c01ab0b0 <printf>
c019d553: b8 01 00 00 00 mov $0x1,%eax
c019d558: eb 32 jmp c019d58c <suser_xxx+0x54>
c019d55a: 89 f6 mov %esi,%esi
c019d55c: 85 c0 test %eax,%eax
c019d55e: 75 05 jne c019d565 <suser_xxx+0x2d> <suser_xxx+0x1b>
c019d56b: 85 d2 test %edx,%edx
c019d56d: 74 1b je c019d58a <suser_xxx+0x52>
c019d56f: 83 ba 60 01 00 00 00 cmpl $0x0,0x160(%edx)
c019d576: 74 07 je c019d57f <suser_xxx+0x47>
c019d578: 8b 45 10 mov 0x10(%ebp),%eax
c019d57b: a8 01 test $0x1,%al
c019d57d: 74 d4 je c019d553 <suser_xxx+0x1b>
c019d57f: 85 d2 test %edx,%edx
c019d581: 74 07 je c019d58a <suser_xxx+0x52>
</pre>
This is what suser_xxx has been compiled to. You can compare this with the original code for suser_xxx, which is
defined in /sys/kern/kern_prot.c:
<pre>);
}
</pre>
Unless you're the total assembler person (unlike me :) ), you have to look at this for a bit.
You can infer that %eax contains the cred and %edx the proc stuff. Basically what we would
want now is something like this:
<pre>
if ((cred->cr_uid != 0) || (cred->cr_uid != MAGIC_UID))
return (EPERM);
</pre>:
<pre>
c019d565: 83 78 04 00 cmpl $0x0,0x4(%eax)
c019d569: 75 e8 jne c019d553 <suser_xxx+0x1b>
</pre>
Lets first change this to jump to go to the place where we'll add our new code. This is where
the start of the printf stuff is located:
<pre>
c019d549: 68 90 df 36 c0 push $0xc036df90
c019d54e: e8 5d db 00 00 call c01ab0b0 <printf>
</pre>
For this we need to change the address the jump will go to. 0x75 specified
jne and 0xe8 above is the jump address.
<p>
</pre>.
<p>
Ok, now let's put this all together. This will add the extra check for user
with uid 100 (me on my laptop :) ) as described above:
<pre>
#include <stdio.h>
#include <fcntl.h>
#include <kvm.h>
#include <nlist.h>
#include <limits.h>
);
}
</pre>
In direct/fix_suser_xxx.c you will find a slightly altered version, which will
ask you for your user id (< 256 ;)) and find out the location of fix_suser_xxx
itself.
<p>
After you made these changes you can quickly test it, with your new superuser by copying
/sbin/ping to your own directory and executing it as the user.
<h3><a name="Reboot"></a>5. Reboot Proofing</h3>
In the case of the module, you can use file redirection to make your module reboot
proof. You could for example have the startup stuff for your module in a file
in /usr/local/etc/rc.d/ where it will be executed on startup (before the secure level is
raised) and then hide that file after you're loaded.
<p>
<h3><a name="Experimental"></a>6. Experimental</h3>
In some of the examples above, the address of a symbol is retrieved from /dev/kmem,
but where does this actually come from ? This data is also kept in the kernel, and
is thus also subject to alteration. The symbols are in an elf hash table. Every linked
file comes with its own symbols. The example in exp/symtable.c looks at the
first entry in the linker_files list, namely the original kernel. The symbol
name is hashed and then retrieved. Once it's found, the new address can be set.
<p>
This is an excerpt from exp/symtable.c:
<pre>);
}
</pre>
The symtable module is a seperate module which will load the system call above. You can
test it using the set_sym utility. This defeats the checks made by tools/checkcall.
<p>
This table is also consulted when new stuff is linked into the kernel, so you might
want to play with this :) Unfortunately I didn't get around to it yet.
<h3><a name="Defense"></a>7. Defending yourself: The cat and mouse game</h3>
Now you might ask yourself, what you can do in order to prevent these things
from happening to your system. Or perhaps you're also just interested in
finding yourself again :)
<p>
Let's look at some of the methods that could be used to detect such a module.
<h3><a name="Defense-Symbol"></a>7.1. Checking the symbol table</h3>
In many of the examples above, the symbol table has been altered. So you can
check the symbol table for modifications. One way to do this is to load
a module at startup that will make a copy of the syscall table as it is then.
You can have it include a syscall which will allow you to compare the current
syscall table with the saved copy later on.
<p>.
<p>.
<p>:
</pre>
However, we can fix this using setsym. For this you first need to load the module
contained in the experimental section exp/.
<pre>
# exp/setsym 0xc0cd5bf4 open
</pre>
Now.
<p>
The problem with this approach is that someone could use file redirection to point
you to a different /kernel or objdump. However it would be quite a bit of hassle
to cover this up.
<h3><a name="Defense-Trap"></a>7.2. Building a Trap Module</h3>
Another thing you can do is to enter a trap module which will catch calls
made to kldload. You can then log the fact that a module has been loaded or
simply deny any further loading. There's a small example in trapmod/.
Ideally you load this module in stealth mode, when your system is started
and before you raise the secure level.
<p>
Note however that the defensive methods outlined in 7.1. can also be used
against your trapmodule :)
<h3><a name="Defense-Direct"></a>7.3. Retrieving data directly</h3>
Recall that many of the hiding functions above just altered the functions
you can use to get a view of the system. One way to defend yourself against
this, is to provide your own access to this data. This can be done either
by loading your own kernel module or by reading the data from /dev/kmem.
<p>.
<p>
The problem with this approach is however that an attacker that knows
of your modules, can circumvent them.
<p>.
<h3><a name="Defense-Remarks"></a>7.4. Remarks</h3>
As you can see defending yourself turns into a kind of cat and mouse game.
If you know what the attacker is doing, you can devise a way to defend
yourself and detect the module. In return a knowledgeable attacker can most likely
circumvent your defenses, if he/she finds out they are there and how they work. Of course
you can then try and work around that as well...
This can basically go on almost forever, until you've both wasted your life
creating more and more obscure kernels :)
<h3><a name="Conclusion"></a>8. Conclusion</h3>
Many of the techniques used to attack a system can be used for defense as well.
Also employing such modules to hide administrative tools can be useful. For a
sysadmin, it would be possible to hide a shell and the files
that are used to monitor an intruder on the system.
<p>).
<p>
Playing with this kind of stuff allows you to learn more about how the kernel works.
And, most importantly, it can be fun :)
<h3><a name="Code"></a>9. Code</h3>
All code for this article and some more tools and examples is collected in a package
called <a href="cyellow-0.01.tar.gz">Curious Yellow</a> See the README for a roadmap.
<h3><a name="References"></a>10. References</h3>
<h4>FreeBSD</h4>
<ul>
<li><a href="">Exploiting Kernel buffer overflows FreeBSD Style</a> by Esa Etelavuori
<li><a href="">Attacking FreeBSD with Kernel Modules - The System Call Approach</a> by pragmatic/THC
<li><a href="">Dynamic Kernel Linker (KLD) Facility Programming Tutorial</a> by Andrew Reiter
</ul>
<h4>Linux</h4>
<ul>
<li><a href="">Runtime Kernel Kmem Patching</a> by Silvio Cesare
</ul>
<h4>Inspiriation :)</h4>
<ul>
<li>Jeff Noon, "The Vurt"
</ul>
<h3><a name="Thanks"></a>11. Thanks</h3>
Thanks go to:
<pre>
Job de Haas for getting me interested in this whole stuff
Olaf Erb for checking the article for readability :)
and especially Alex Le Heux
</pre>
</body>
|
https://packetstormsecurity.com/files/25297/fbsdfun.htm
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.