Q: Fast word count function in Vim I am trying to display a live word count in the vim statusline. I do this by setting my status line in my .vimrc and inserting a function into it. The idea of this function is to return the number of words in the current buffer. This number is then displayed on the status line. This should work nicely as the statusline is updated at just about every possible opportunity so the count will always remain 'live'.
The problem is that the function I currently have defined is slow, and so Vim is noticeably sluggish for all but the smallest files, due to this function being executed so frequently.
In summary, does anyone have a clever trick for producing a function that is blazingly fast at calculating the number of words in the current buffer and returning the result?
A: Keep a count for the current line and a separate count for the rest of the buffer. As you type (or delete) words on the current line, update only that count, but display the sum of the current line count and the rest of the buffer count.
When you change lines, add the current line count to the buffer count, count the words in the current line and a) set the current line count and b) subtract it from the buffer count.
It would also be wise to recount the buffer periodically (note that you don't have to count the whole buffer at once, since you know where editing is occurring).
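A minimal Vimscript sketch of this bookkeeping, assuming whitespace-separated words (all names here are illustrative; a fuller implementation along these lines appears in a later answer):
" Count the words on one line.
fun! s:LineWords(lnum)
  return len(split(getline(a:lnum)))
endfun
fun! s:InitCounts()
  let b:rest_count = 0
  for lnum in range(1, line('$'))
    let b:rest_count += s:LineWords(lnum)
  endfor
  let b:cur_count = s:LineWords(line('.'))
  let b:rest_count -= b:cur_count
  let b:last_lnum = line('.')
endfun
fun! LiveWordCount()
  if !exists('b:last_lnum')
    call s:InitCounts()
  elseif line('.') != b:last_lnum
    " Changed lines: fold the old line's count back in, pull the new line's out.
    let b:rest_count += b:cur_count - s:LineWords(line('.'))
    let b:last_lnum = line('.')
  endif
  let b:cur_count = s:LineWords(line('.'))
  return b:rest_count + b:cur_count
endfun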
A: This will recalculate the number of words whenever you stop typing for a while (specifically, updatetime ms).
let g:word_count = "<unknown>"
fun! WordCount()
  return g:word_count
endfun
fun! UpdateWordCount()
  " %:p expands to the full path of the current file; shellescape guards spaces
  let s = system("wc -w " . shellescape(expand("%:p")))
  let parts = split(s, ' ')
  if len(parts) > 1
    let g:word_count = parts[0]
  endif
endfun
augroup WordCounter
  au! CursorHold * call UpdateWordCount()
  au! CursorHoldI * call UpdateWordCount()
augroup END
" how eager are you? (default is 4000 ms)
set updatetime=500
" modify as you please...
set statusline=%{WordCount()}\ words
Enjoy!
A: So I've written:
func CountWords()
  exe "normal g\<c-g>"
  let words = substitute(v:statusmsg, "^.*Word [^ ]* of ", "", "")
  let words = substitute(words, ";.*", "", "")
  return words
endfunc
But it prints out info to the statusbar, so I don't think it will be suitable for your use-case. It's very fast, though!
A: Since vim version 7.4.1042
Since vim version 7.4.1042, one can simply alter the statusline as follows:
set statusline+=%{wordcount().words}\ words
set laststatus=2 " enables the statusline.
Word count in vim-airline
Word count is provided as standard by vim-airline for a number of file types, which at the time of writing are:
asciidoc, help, mail, markdown, org, rst, tex, text
If word count is not shown in vim-airline, most often this is due to an unrecognised file type. For example, at least for now, the compound file type markdown.pandoc is not recognised by vim-airline for word count. This can easily be remedied by amending the .vimrc as follows:
let g:airline#extensions#wordcount#filetypes = '\vasciidoc|help|mail|markdown|markdown.pandoc|org|rst|tex|text'
set laststatus=2 " enables vim-airline.
The \v prefix makes the pattern "very magic"; the line overrides the default g:airline#extensions#wordcount#filetypes variable. The last line ensures vim-airline is enabled.
In case of doubt, the &filetype of an opened file is returned upon issuing the following command:
:echo &filetype
A: I really like Michael Dunn's answer above but I found that when I was editing it was causing me to be unable to access the last column. So I have a minor change for the function:
function! WordCount()
  let s:old_status = v:statusmsg
  let position = getpos(".")
  exe ":silent normal g\<c-g>"
  let stat = v:statusmsg
  let s:word_count = 0
  if stat != '--No lines in buffer--'
    let s:word_count = str2nr(split(v:statusmsg)[11])
    let v:statusmsg = s:old_status
  endif
  call setpos('.', position)
  return s:word_count
endfunction
I've included it in my status line without any issues:
:set statusline=wc:%{WordCount()}
A: Here's a usable version of Rodrigo Queiro's idea. It doesn't change the status bar, and it restores the statusmsg variable.
function WordCount()
  let s:old_status = v:statusmsg
  exe "silent normal g\<c-g>"
  let s:word_count = str2nr(split(v:statusmsg)[11])
  let v:statusmsg = s:old_status
  return s:word_count
endfunction
This seems to be fast enough to include directly in the status line, e.g.:
:set statusline=wc:%{WordCount()}
A: I used a slightly different approach for this. Rather than make sure the word count function is especially fast, I only call it when the cursor stops moving. These commands will do it:
:au CursorHold * exe "normal g\<c-g>"
:au CursorHoldI * exe "normal g\<c-g>"
Perhaps not quite what the questioner wanted, but much simpler than some of the answers here, and good enough for my use-case (glance down to see word count after typing a sentence or two).
Setting updatetime to a smaller value also helps here:
set updatetime=300
There isn't a huge overhead polling for the word count because CursorHold and CursorHoldI only fire once when the cursor stops moving, not every updatetime ms.
A: Here is a refinement of Abslom Daak's answer that also works in visual mode.
function! WordCount()
  let s:old_status = v:statusmsg
  let position = getpos(".")
  exe ":silent normal g\<c-g>"
  let stat = v:statusmsg
  let s:word_count = 0
  if stat != '--No lines in buffer--'
    if stat =~ "^Selected"
      let s:word_count = str2nr(split(v:statusmsg)[5])
    else
      let s:word_count = str2nr(split(v:statusmsg)[11])
    endif
    let v:statusmsg = s:old_status
  endif
  call setpos('.', position)
  return s:word_count
endfunction
Included in the status line as before. Here is a right-aligned status line:
set statusline=%=%{WordCount()}\ words\
A: I took the bulk of this from the vim help pages on writing functions.
function! WordCount()
  let lnum = 1
  let n = 0
  while lnum <= line('$')
    let n = n + len(split(getline(lnum)))
    let lnum = lnum + 1
  endwhile
  return n
endfunction
Of course, like the others, you'll need to:
:set statusline=wc:%{WordCount()}
I'm sure this can be cleaned up by somebody to make it more vimmy (s:n instead of just n?), but I believe the basic functionality is there.
Edit:
Looking at this again, I really like Mikael Jansson's solution. I don't like shelling out to wc (not portable and perhaps slow). If we replace his UpdateWordCount function with the code I have above (renaming my function to UpdateWordCount), then I think we have a better solution.
A: My suggestion:
function! UpdateWordCount()
  let b:word_count = eval(join(map(getline("1", "$"), "len(split(v:val, '\\s\\+'))"), "+"))
endfunction
augroup UpdateWordCount
  au!
  autocmd BufRead,BufNewFile,BufEnter,CursorHold,CursorHoldI,InsertEnter,InsertLeave * call UpdateWordCount()
augroup END
let &statusline='wc:%{get(b:, "word_count", 0)}'
I'm not sure how this compares in speed to some of the other solutions, but it's certainly a lot simpler than most.
A: I'm new to Vim scripting, but I might suggest
function WordCount()
  redir => l:status
  exe "silent normal g\<c-g>"
  redir END
  return str2nr(split(l:status)[11])
endfunction
as being a bit cleaner since it does not overwrite the existing status line.
My reason for posting is to point out that this function has a puzzling bug: namely, it breaks the append command. Hitting A should drop you into insert mode with the cursor positioned to the right of the final character on the line. However, with this custom status bar enabled it will put you to the left of the final character.
Anyone have any idea what causes this?
A: This is an improvement on Michael Dunn's version, caching the word count so even less processing is needed.
function! WC()
  if &modified || !exists("b:wordcount")
    let l:old_status = v:statusmsg
    execute "silent normal g\<c-g>"
    let b:wordcount = str2nr(split(v:statusmsg)[11])
    let v:statusmsg = l:old_status
    return b:wordcount
  else
    return b:wordcount
  endif
endfunction
A: Since vim now supports this natively:
:echo wordcount().words
A: Using the method in the answer provided by Steve Moyer I was able to produce the following solution. It is a rather inelegant hack I'm afraid and I feel that there must be a neater solution, but it works, and is much faster than simply counting all of the words in a buffer every time the status line is updated. I should note also that this solution is platform independent and does not assume a system has 'wc' or something similar.
My solution does not periodically update the buffer, but the answer provided by Mikael Jansson would be able to provide this functionality. I have not, as of yet, found an instance where my solution becomes out of sync. However I have only tested this briefly as an accurate live word count is not essential to my needs. The pattern I use for matching words is also simple and is intended for simple text documents. If anyone has a better idea for a pattern or any other suggestions please feel free to post an answer or edit this post.
My solution:
"returns the count of how many words are in the entire file excluding the current line
"updates the buffer variable Global_Word_Count to reflect this
fu! OtherLineWordCount()
let data = []
"get lines above and below current line unless current line is first or last
if line(".") > 1
let data = getline(1, line(".")-1)
endif
if line(".") < line("$")
let data = data + getline(line(".")+1, "$")
endif
let count_words = 0
let pattern = "\\<\\(\\w\\|-\\|'\\)\\+\\>"
for str in data
let count_words = count_words + NumPatternsInString(str, pattern)
endfor
let b:Global_Word_Count = count_words
return count_words
endf
"returns the word count for the current line
"updates the buffer variable Current_Line_Number
"updates the buffer variable Current_Line_Word_Count
fu! CurrentLineWordCount()
if b:Current_Line_Number != line(".") "if the line number has changed then add old count
let b:Global_Word_Count = b:Global_Word_Count + b:Current_Line_Word_Count
endif
"calculate number of words on current line
let line = getline(".")
let pattern = "\\<\\(\\w\\|-\\|'\\)\\+\\>"
let count_words = NumPatternsInString(line, pattern)
let b:Current_Line_Word_Count = count_words "update buffer variable with current line count
if b:Current_Line_Number != line(".") "if the line number has changed then subtract current line count
let b:Global_Word_Count = b:Global_Word_Count - b:Current_Line_Word_Count
endif
let b:Current_Line_Number = line(".") "update buffer variable with current line number
return count_words
endf
"returns the word count for the entire file using variables defined in other procedures
"this is the function that is called repeatedly and controls the other word
"count functions.
fu! WordCount()
if exists("b:Global_Word_Count") == 0
let b:Global_Word_Count = 0
let b:Current_Line_Word_Count = 0
let b:Current_Line_Number = line(".")
call OtherLineWordCount()
endif
call CurrentLineWordCount()
return b:Global_Word_Count + b:Current_Line_Word_Count
endf
"returns the number of patterns found in a string
fu! NumPatternsInString(str, pat)
let i = 0
let num = -1
while i != -1
let num = num + 1
let i = matchend(a:str, a:pat, i)
endwhile
return num
endf
This is then added to the status line by:
:set statusline=wc:%{WordCount()}
I hope this helps anyone looking for a live word count in Vim, albeit one that isn't always exact. Alternatively, of course, g CTRL-G will give you Vim's own word count!
A: In case someone else is coming here from Google, I modified Abslom Daak's answer to work with Airline. I saved the following as
~/.vim/bundle/vim-airline/autoload/airline/extensions/pandoc.vim
and added
call airline#extensions#pandoc#init(s:ext)
to extensions.vim
let s:spc = g:airline_symbols.space

function! airline#extensions#pandoc#word_count()
  if mode() == "s"
    return 0
  else
    let s:old_status = v:statusmsg
    let position = getpos(".")
    let s:word_count = 0
    exe ":silent normal g\<c-g>"
    let stat = v:statusmsg
    if stat != '--No lines in buffer--'
      let s:word_count = str2nr(split(v:statusmsg)[11])
      let v:statusmsg = s:old_status
    endif
    call setpos('.', position)
    return s:word_count
  endif
endfunction

function! airline#extensions#pandoc#apply(...)
  if &ft == "pandoc"
    let w:airline_section_x = "%{airline#extensions#pandoc#word_count()} Words"
  endif
endfunction

function! airline#extensions#pandoc#init(ext)
  call a:ext.add_statusline_func('airline#extensions#pandoc#apply')
endfunction
A: A variation of Guy Gur-Ari's refinement that
* only counts words if spell checking is enabled,
* counts the number of selected words in visual mode,
* stays mute outside of insert and normal mode, and
* hopefully is more agnostic to the system language (when different from English)
function! StatuslineWordCount()
  if !&l:spell
    return ''
  endif
  if empty(getline(line('$')))
    return ''
  endif
  let mode = mode()
  if !(mode ==# 'v' || mode ==# 'V' || mode ==# "\<c-v>" || mode =~# '[ni]')
    return ''
  endif
  let s:old_status = v:statusmsg
  let position = getpos('.')
  let stat = v:statusmsg
  let s:word_count = 0
  exe ":silent normal g\<c-g>"
  try
    if mode ==# 'v' || mode ==# 'V'
      let s:word_count = split(split(v:statusmsg, ';')[1])[0]
    elseif mode ==# "\<c-v>"
      let s:word_count = split(split(v:statusmsg, ';')[2])[0]
    elseif mode =~# '[ni]'
      let s:word_count = split(split(v:statusmsg, ';')[2])[3]
    endif
  " index out of range
  catch /^Vim\%((\a\+)\)\=:E\%(684\|116\)/
    return ''
  endtry
  let v:statusmsg = s:old_status
  call setpos('.', position)
  return "\ \|\ " . s:word_count . 'w'
endfunction
that can be appended to the statusline by, say,
set statusline+=%.10{StatuslineWordCount()} " wordcount
A: Building upon https://stackoverflow.com/a/60310471/11001018, my suggestion is:
"new in vim 7.4.1042
let g:word_count=wordcount().words
function WordCount()
if has_key(wordcount(),'visual_words')
let g:word_count=wordcount().visual_words."/".wordcount().words
else
let g:word_count=wordcount().cursor_words."/".wordcount().words
endif
return g:word_count
endfunction
And then:
set statusline+=\ w:%{WordCount()},
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
}
|
Q: Consequences of running a Java Class file on different JREs? What are the consequences of running a Java class file compiled in JDK 1.4.2 on JRE 1.6 or 1.5?
A: The Java SE 6 Compatibility page lists the compatibility of Java SE 6 with Java SE 5.0. Furthermore, there is a link to Incompatibilities in J2SE 5.0 (since 1.4.2) as well. By looking at the two documents, it should be possible to find out whether there are any incompatibilities between programs written under JDK 1.4.2 and Java SE 6.
In terms of the binary compatibility of the Java class files, the Java SE 6 Compatibility page has the following to say:
Java SE 6 is upwards binary-compatible with J2SE 5.0 except for the incompatibilities listed below. Except for the noted incompatibilities, class files built with version 5.0 compilers will run correctly in JDK 6.
So, in general, as workmad3 noted, Java class files compiled on a older JDK will still be compatible with the newest version. Furthermore, as noted by Desty, any changes to the API are generally deprecated rather than removed.
From the Source Compatibilities section:
Deprecated APIs are interfaces that are supported only for backwards compatibility. The javac compiler generates a warning message whenever one of these is used, unless the -nowarn command-line option is used. It is recommended that programs be modified to eliminate the use of deprecated APIs, although there are no current plans to remove such APIs entirely from the system with the exception of JVMDI and JVMPI.
There is a long listing of performance improvements in the Java SE 6 Performance White Paper.
A: Java classes are forward compatible, e.g. classes generated using a 1.5 compiler will be loaded and executed without any problems on JRE 1.6. Generally, classes generated by today's Java compilers will be compatible with future JREs (for example, Java 7).
The inverse does not hold: you cannot run classes generated by 1.6 on older JREs (1.3, 1.4, etc.).
A: Java compilers specify source and target compliance levels. This way, you can compile for any JRE from any other higher-versioned JRE. You need to make sure to use these compliance levels because there are API differences between JREs. For example, JRE 1.5 introduced StringBuilder at the compiler level. This means any time you do:
String s = "string1" + "string2";
The compiler changes it to:
String s = new StringBuilder("string1").append("string2").toString();
Obviously, this will break with a NoClassDefFoundError when you attempt to construct the StringBuilder.
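For illustration, cross-compiling for an older runtime with javac looks roughly like this (the JDK path is hypothetical; -source/-target set the compliance levels, and -bootclasspath compiles against the older runtime's classes so newer APIs are not silently linked in):
javac -source 1.4 -target 1.4 -bootclasspath /path/to/jdk1.4/jre/lib/rt.jar MyClass.java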
A: Theoretically, nothing. The JVM is supposedly backwards compatible. Myself, I've never had a problem in that direction.
A: Depends entirely on what parts of the java library you are using. It could be anything from 'absolutely fine, no difference whatsoever' to 'OMG!! WHY HAS IT JUST FORMATTED MY HARD DRIVE??' (Well, perhaps not this second one, but it serves to support the point of it going from nothing to possibly bad :)).
Your class could also pick up on bug fixes in the library as well, which would mean niggling bugs disappear (or could be introduced depending on if you were relying on buggy behaviour or not).
AFAIK though, the java bytecode is backwards compatible so you shouldn't get any issues with it just not doing anything.
A: One positive consequence is that the 1.4 classes will still take advantage of speed improvements made to the JVM (although not necessarily improvements made to library classes).
A: I just ran into a problem like this myself. I was writing code that should work with 1.6, but the college had 1.3 installed. Lots of methods just don't work, e.g.
input = ""+ JOptionPane.showInputDialog(null,"Enter a four digit number to " + (b?"encrypt":"decrypt")+".",(b?"4086":"5317"));
wouldn't work, but
input = ""+ JOptionPane.showInputDialog(null,"Enter a four digit number to " + (b?"encrypt":"decrypt")+".");
would. The showInputDialog overload that accepts three arguments doesn't seem to exist in 1.3.
This is just a long-winded way of saying that working with the 1.6 API on 1.3 results in head-slamming incidents.
A: It should work. I don't remember encountering any problems with it, except when parts of the Java API are deprecated, in which case it'll explain what they are anyway and you can hopefully write a workaround.
Of course, running a class file compiled with JDK 1.6 in JRE 1.5 would cause a problem: even a JRE only a minor build revision older will throw an error.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Upgrade MSDE to SQL Server 2008 I am trying to upgrade an named instance of MSDE to SQL Server 2008 Express. When I get to the "Select Features" page of the 2008 installer there are no instances listed. The upgrade to SQL Server 2005 Express (on the same VM image) however works fine.
It seems to be a supported scenario (http://msdn.microsoft.com/en-us/library/ms143393.aspx), yet I am finding that it does not work. Has anyone successfully done this?
A: It looks to be supported: http://msdn.microsoft.com/en-us/library/ms143393.aspx
There are also comments you might find useful.
A: I just had the same problem, so I'll post my solution for anyone that happens upon this thread:
You are probably seeing your named instance on that screen, but it is greyed out.
Check that you have SP4 for MSDE, which is version 8.0.2039.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do you create a custom attribute for MS Test? If you want to create a custom attribute for MS Test (say [Repeat(3)]), how would you do that?
A: I don't think you will like the answer: there is no supported way. However, there is a CodePlex project, MSTestExtensions, implementing a workaround, and a blog post about how MSTestExtensions works (using ContextBoundObject).
A: Roy, If you are using Typemock Isolator, you can use the AOP Decorator to extend any test framework.
A: Looking further, here is one possible place to start:
Peli's blog
I'm still looking for other good resources or examples.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Determine the number of rows in a range I know the range name of the start of a list - 1 column wide and x rows deep.
How do I calculate x?
There is more data in the column than just this list. However, this list is contiguous - there is nothing in any of the cells above or below or either side beside it.
A: Why not use an Excel formula to determine the rows? For instance, if you are looking for how many cells contain data in Column A use this:
=COUNTIFS(A:A,"<>")
You can replace <> with any value to get how many rows have that value in it.
=COUNTIFS(A:A,"2008")
This can be used for finding filled cells in a row too.
A: Sheet1.Range("myrange").Rows.Count
A: You can also use:
Range( RangeName ).end(xlDown).row
to find the last row with data in it starting at your named range.
A: I am sure that you probably wanted the answer that @GSerg gave. There is also a worksheet function called rows that will give you the number of rows.
So, if you have a named data range called Data that has 7 rows, then =ROWS(Data) will show 7 in that cell.
A: Function ListRowCount(ByVal FirstCellName As String) As Long
    With ThisWorkbook.Names(FirstCellName).RefersToRange
        If IsEmpty(.Offset(1, 0).Value) Then
            ListRowCount = 1
        Else
            ListRowCount = .End(xlDown).Row - .Row + 1
        End If
    End With
End Function
But if you are damn sure there's nothing around the list, then just ThisWorkbook.Names(FirstCellName).RefersToRange.CurrentRegion.Rows.Count
A: That single last line worked perfectly @GSerg.
The other function was what I had been working on but I don't like having to resort to UDF's unless absolutely necessary.
I had been trying a combination of Excel and VBA and had got this to work, but it's clunky compared with your answer.
strArea = Sheets("Oper St Report CC").Range("cc_rev").CurrentRegion.Address
cc_rev_rows = "=ROWS(" & strArea & ")"
Range("cc_rev_count").Formula = cc_rev_rows
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: How to set the background-position to an absolute distance, starting from right? I want to set a background image for a div, in a way that it is in the upper RIGHT of the div, but with a fixed 10px distance from top and right.
Here is how I would do that if wanted it in the upper LEFT of the div:
background: url(images/img06.gif) no-repeat 10px 10px;
Is there anyway to achieve the same result, but showing the background on the upper RIGHT?
A: I don't know if it is possible in pure CSS, so you can try
background: url(images/img06.gif) no-repeat top right;
and modify your image to incorporate a 10px border on the top and right in a transparent color
A: In all modern browsers and IE down even to version 9 you can use a four-value syntax, specified in CSS3:
background-position: right 10px top 10px;
Source: MDN
A: Use the previously mentioned rule along with a top and right margin:
background: url(images/img06.gif) no-repeat top right;
margin-top: 10px;
margin-right: 10px;
Background images only appear within padding, not margins. If adding the margin isn't an option you may have to resort to another div, although I'd recommend you only use that as a last resort to try and keep your markup as lean and sementic as possible.
A: There are a few ways you can do this.
* Do the math yourself, if possible. You already know the dimensions of your image. If you know the dimensions of the div, you can just put the image at (div width - image width - 10, div height - image height - 10).
* Use Javascript to do the heavy lifting for you. Pretty much the same method as above, except you don't need to know the dimensions of the div itself. Javascript can tell you.
* A more hackish way would be to put a 10px transparent border around the top and right of your image, and set the position to top right.
A: You can use percentages:
background: url(...) top 98% no-repeat;
If you know the width of the parent div it should be pretty easy to determine what percentage you need to use.
A: One solution is to absolutely position an empty div, and give that the background. I don't believe there's a way to do it purely with CSS, no changes to the image, and no extra markup in a fluid layout.
A: You can fake the space on the right-hand side with a border in pixels (white most of the time, or maybe something else):
background: url(../images/calender.svg) no-repeat center right;
border-right: 5px solid white;
A: The correct format is:
background: url(YourUrl) 0px -50px no-repeat;
Where 0px is the horizontal position and -50px is the vertical position.
CSS background-position accepts negative values.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
}
|
Q: Is it possible to send a collection of ID's as a ADO.NET SQL parameter? Eg. can I write something like this code:
public void InactiveCustomers(IEnumerable<Guid> customerIDs)
{
    //...
    myAdoCommand.CommandText =
        "UPDATE Customer SET Active = 0 WHERE CustomerID in (@CustomerIDs)";
    myAdoCommand.Parameters["@CustomerIDs"].Value = customerIDs;
    //...
}
The only way I know is to Join my IEnumerable and then use string concatenation to build my SQL string.
A: You can with SQL 2008. It hasn't been out very long, but it is available.
A: As was mentioned in a comment, Erland Sommarskog wrote a series of articles on this topic (linked-to below). The articles are very thorough and can serve as reference material. While they are specific to SQL Server (T-SQL), some of the techniques mentioned might also work for other RDBMSs (such as using an XML data type):
* Arrays and Lists in SQL Server 2008 Using Table-Valued Parameters:
  * Table-Valued Parameters (TVPs)
* Arrays and Lists in SQL Server 2005 and Beyond When TVPs Do Not Cut it:
  * string serialization and de-serialization of scalar values
  * SQLCLR
  * passing structured list data via the XML data type
  * Dynamic SQL
* Arrays and Lists in SQL Server 2000 and Earlier
A: Generally the way that you do this is to pass in a comma-separated list of values, and within your stored procedure, parse the list out and insert it into a temp table, which you can then use for joins. As of Sql Server 2005, this is standard practice for dealing with parameters that need to hold arrays.
Here's a good article on various ways to deal with this problem:
Passing a list/array to an SQL Server stored procedure
But for Sql Server 2008, we finally get to pass table variables into procedures, by first defining the table as a custom type.
There is a good description of this (and more 2008 features) in this article:
Introduction to New T-SQL Programmability Features in SQL Server 2008
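As a sketch of the SQL Server 2008 table-valued-parameter route (all object names below are illustrative, connectionString is assumed to be defined elsewhere, and the usual usings are System.Data and System.Data.SqlClient): define a table type once in the database, then pass a DataTable as a structured parameter.
// One-time setup in the database (illustrative names):
//   CREATE TYPE dbo.GuidList AS TABLE (ID uniqueidentifier NOT NULL);
//   CREATE PROCEDURE dbo.InactivateCustomers @CustomerIDs dbo.GuidList READONLY
//   AS UPDATE c SET c.Active = 0
//      FROM Customer c JOIN @CustomerIDs ids ON c.CustomerID = ids.ID;
public void InactiveCustomers(IEnumerable<Guid> customerIDs)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.InactivateCustomers", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        // Shape the IDs into a DataTable whose single column matches the table type.
        var table = new DataTable();
        table.Columns.Add("ID", typeof(Guid));
        foreach (var id in customerIDs)
            table.Rows.Add(id);
        var p = cmd.Parameters.AddWithValue("@CustomerIDs", table);
        p.SqlDbType = SqlDbType.Structured;
        p.TypeName = "dbo.GuidList";
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}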
A: You can use xml parameter type:
CREATE PROCEDURE SelectByIdList(@productIds xml) AS
DECLARE @Products TABLE (ID int)

INSERT INTO @Products (ID)
SELECT ParamValues.ID.value('.','VARCHAR(20)')
FROM @productIds.nodes('/Products/id') as ParamValues(ID)

SELECT *
FROM Products
INNER JOIN @Products p ON Products.ProductID = p.ID
http://weblogs.asp.net/jgalloway/archive/2007/02/16/passing-lists-to-sql-server-2005-with-xml-parameters.aspx
A: Nope. Parameters are like SQL values in obeying first normal form; basically, there can only be one value per parameter...
As you are probably aware, generating SQL strings is risky business: you leave yourself open to an SQL injection attack. As long as you're dealing with bona fide GUID's you should be fine, but otherwise you need to be sure to cleanse your input.
A: You cannot pass a list as a single SQL parameter. You could string.Join(',') the GUIDs, such as "0000-0000-0000-0000, 1111-1111-1111-1111", but this would be high on database overhead and suboptimal really. And you would have to pass the whole string as a single concatenated dynamic statement; you can't add it as a parameter.
Question:
Where are you getting your list of ID's that represent inactive customers from?
My suggestion is to approach the problem a little differently. Move all that logic into the database, something like:
CREATE PROCEDURE usp_DeactivateCustomers
    @inactive varchar(50) /*or whatever values are required to identify inactive customers*/
AS
    UPDATE c SET c.Active = 0
    FROM Customer c JOIN tableB b ON c.CustomerID = b.CustomerID
    WHERE b.someField = @inactive
And call it as a stored procedure:
public void InactiveCustomers(string inactive)
{
    //...
    myAdoCommand.CommandText = "usp_DeactivateCustomers";
    myAdoCommand.CommandType = CommandType.StoredProcedure; // required when the text is a procedure name
    myAdoCommand.Parameters["@inactive"].Value = inactive;
    //...
}
If a list of GUIDs exists in a database, why do I need to: find them; put them in a generic list; unwind the list into a CSV/XML/table variable, just to present them back to the DB again????? They're already there! Am I missing something?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: Hide a gridView row in asp.net I am creating a gridView that allows adding new rows by adding the controls necessary for the insert into the FooterTemplate, but when the ObjectDataSource has no records, I add a dummy row as the FooterTemplate is only displayed when there is data.
How can I hide this dummy row? I have tried setting e.row.visible = false on RowDataBound but the row is still visible.
A: You could handle the gridview's databound event and hide the dummy row. (Don't forget to assign the event property in the aspx code):
protected void GridView1_DataBound(object sender, EventArgs e)
{
    if (GridView1.Rows.Count == 1)
        GridView1.Rows[0].Visible = false;
}
A: Please try the following
protected void GridView1_DataBound(object sender, EventArgs e)
{
    GridView1.Rows[0].Visible = false;
}
A: I think this is what you need:
<asp:GridView ID="grid" runat="server" AutoGenerateColumns="false" ShowFooter="true" OnRowDataBound="OnRowDataBound">
    <Columns>
        <asp:TemplateField HeaderText="headertext">
            <ItemTemplate>
                itemtext
            </ItemTemplate>
            <FooterTemplate>
                insert controls
            </FooterTemplate>
        </asp:TemplateField>
    </Columns>
</asp:GridView>
and the codebehind:
protected void OnRowDataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        e.Row.Attributes["style"] = "display:none";
    }
}
But I do not understand why you are adding your "insert controls" to the footer instead of placing them below the grid.
A: This is the incorrect usage of the GridView control. The GridView control has a special InsertRow which is where your controls should go.
A: Maybe try:
e.Row.Height = Unit.Pixel(0);
This isn't the right answer, but it might work in the meantime until you get the right answer.
A: Maybe use CSS to set display none?!
A: GridView has a special property to access the footer row, named "FooterRow".
Then, you could try yourGrid.FooterRow.Visible = false;
A: I did this on a previous job, but since you can add rows, I always had it visible in the footer row. To make it so that the grid shows up, I bound an empty row of the type that is normally bound:
Dim row As DataRow = table.NewRow()
table.Rows.Add(row)
gridView.DataSource = table
gridView.DataBind()
Then it has all the columns that you need. You can access the footer by pulling this:
'this will get the footer no matter how many rows there are in the grid.
Dim footer As Control = gridView.Controls(0).Controls(gridView.Controls(0).Controls.Count - 1)
Then to access any of the controls in the footer you would do a:
Dim cntl As Control = footer.FindControl(<Insert Control Name Here>)
I'd assume you'd be able to do a:
footer.Visible = False
to make the footer row invisible.
I hope this helps!
Edit: I just figured out what you said. I basically delete the row when I add a new one, but to do this you need to check to see if there are any other rows, and if there are, check to see if there are values in them.
To delete the dummy row, do something like this:
If mTable.Rows.Count = 1 AndAlso mTable.Rows(0)(<first column to check for null value>) Is DBNull.Value AndAlso mTable.Rows(0)(<second column>) Is DBNull.Value AndAlso mTable.Rows(0)(<third column>) Is DBNull.Value Then
    mTable.Rows.Remove(mTable.Rows(0))
End If
mTable.Rows.Add(row)
gridView.DataSource = mTable
gridView.DataBind()
A: To make it visible, just use:
Gridview.Rows.Item(i).Attributes.Add("style", "display:block")
And to make it invisible
Gridview.Rows.Item(i).Attributes.Add("style", "display:none")
A: Why are you not using the EmptyDataTemplate? It seems to work great even though I have only been using it for a couple days...
A: You should use DataKeyNames in your GridView:
<asp:GridView ID="GridView1" runat="server" DataKeyNames="FooID">
And then retrieve it on your code:
GridView1.DataKeys[0].Value.ToString()
Where "0" is the number of the row you want to get the "FooID"
A: If you do not want to display the row when the column is null or empty:
if (String.IsNullOrEmpty(item.DataName))
{
    e.Row.Visible = false; // hide the row when the bound value is empty
}
A: It can easily be done in SQL:
USE YourDatabaseName
SELECT * FROM TableName WHERE Column_Name <> ''
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: The difference between the two functions? ("function x" vs "var x = function")
Possible Duplicate:
JavaScript: var functionName = function() {} vs function functionName() {}
What's the difference between:
function sum(x, y) {
return x+y;
}
// and
var sum = function (x, y) {
return x+y;
}
Why is one used over the other?
A: The first one is a named function statement, the second one assigns an anonymous function expression to a variable.
The function statement is added to its scope immediately - you don't need to run it before being able to call it, so this works:
var y = sum(1, 2);

function sum(x, y) {
    return x + y;
}
But the function expression is only assigned to the variable when the code is executed, so this doesn't work:
// Error here because the function hasn't been assigned to sum yet.
var y = sum(1, 2);

var sum = function(x, y) {
    return x + y;
}
An advantage of the expression form is that you can use it to assign different functions to the expression at different points - so you can change the function, or use a different one under different conditions (such as depending on the browser being used).
An advantage of a named function statement, is that debuggers will be able to display the name. Although, you can name function expressions:
var sum = function sum(x, y) {
    return x + y;
}
But this can be confusing since the two names are actually in different scopes and refer to different things.
A: The first is known as a named function where the second is known as an anonymous function.
The key practical difference is in when you can use the sum function. For example:-
var z = sum(2, 3);
function sum(x, y) {
return x+y;
}
z is assigned 5 whereas this:-
var z = sum(2, 3);
var sum = function(x, y) {
return x+y;
}
Will fail since at the time the first line has executed the variable sum has not yet been assigned the function.
Named functions are parsed and assigned to their names before execution begins which is why a named function can be utilized in code that precedes its definition.
Variables assigned a function by code can clearly only be used as function once execution has proceeded past the assignment.
A: The two code snippets you've posted there will, for almost all purposes, behave the same way.
However, the difference in behaviour is that with the second variant, that function can only be called after that point in the code.
With the first variant, the function is available to code that runs above where the function is declared.
This is because with the second variant, the function is assigned to the variable sum at run time. In the first, the function is bound to the identifier sum at parse time.
More technical info
Javascript has three ways of defining functions.
* Your first example is a function declaration. This uses the "function" statement to create a function. The function is made available at parse time and can be called anywhere in that scope. You can still store it in a variable or object property later.
* Your second snippet shows a function expression. This involves using the "function" operator to create a function - the result of that operator can be stored in any variable or object property. The function expression is powerful that way. The function expression is often called an "anonymous function" because it does not have to have a name.
* The third way of defining a function is the "Function()" constructor, which is not shown in your original post (a one-line example follows this list). It's not recommended to use this as it works the same way as eval(), which has its problems.
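For illustration only, the constructor form looks like this (a sketch; the body is parsed from a string at run time, which is why it shares eval()'s drawbacks):
var sum = new Function('x', 'y', 'return x + y;');
sum(1, 2); // 3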
A: The first tends to be used for a few reasons:
* The name "sum" shows up in the stacktrace, which makes debugging easier in many browsers.
* The name "sum" can be used inside the function body, which makes it easier to use for recursive functions.
* Function declarations are "hoisted" in javascript, so in the first case, the function is guaranteed to be defined exactly once.
* Semicolon insertion causes
  var f = function (x) { return 4; }
  (f)
  to assign 4 to f.
There are a few caveats to keep in mind though.
Do not do
var sum = function sum(x, y) { ... };
on IE 6 since it will result in two function objects being created. Especially confusing if you do
var sum = function mySym(x, y) { ... };
According to the standard,
function sum(x, y) { ... }
cannot appear inside an if block or a loop body, so different interpreters will treat
if (0) {
    function foo() { return 1; }
} else {
    function foo() { return 2; }
}
return foo();
differently.
In this case, you should do
var foo;
if (0) {
    foo = function () { return 1; }
} ...
A: The difference is...
This is a nameless function
var sum = function (x, y) {
    return x+y;
}
So if you alert(sum); you get "function (x, y) { return x + y; }" (nameless)
While this is a named function:
function sum(x, y) {
    return x+y;
}
If you alert(sum); now you get "function sum(x, y) { return x + y; }" (name is sum)
Having named functions helps if you are using a profiler, because the profiler can tell you function sum's execution time, etcetera, instead of an unknown function's execution time, etcetera.
A: Here's another example:
function sayHello(name) { alert('hello ' + name) }
Now, suppose you want to modify the onclick event of a button, so that it says "hello world".
You cannot write:
yourBtn.onclick = sayHello('world');
because you must provide a function reference.
Then you can use the second form:
yourBtn.onclick = function() { sayHello('world'); }
PS: Sorry for my bad English!
A: They mean the exact same thing. It's just syntactic sugar. The latter is IMO more revealing of what JavaScript is really doing; i.e. "sum" is just a variable, initialised with a function object, which can then be replaced by something else:
$ js
js> function sum(x,y) { return x+y; }
js> sum(1,2);
3
js> sum=3
3
js> sum(1,2);
typein:4: TypeError: sum is not a function
js> sum
3
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "56"
}
|
Q: Simplest way to have a configuration file in a Windows Forms C# application I'm really new to .NET, and I still haven't got the hang of how configuration files work.
Every time I search on Google about it I get results about web.config, but I'm writing a Windows Forms application.
I figured out that I need to use the System.Configuration namespace, but the documentation isn't helping.
How do I define that my configuration file is XYZ.xml? Or does it have a "default" name for the configuration file? I still didn't get that.
Also, how do I define a new section? Do I really need to create a class which inherits from ConfigurationSection?
I would like to just have a configuration file with some values like this:
<MyCustomValue>1</MyCustomValue>
<MyCustomPath>C:\Some\Path\Here</MyCustomPath>
Is there a simple way to do it? Can you explain in a simple way how to read and write from/to a simple configuration file?
A: Clarification of previous answers...
* Add a new file to your project (Add → New Item → Application Configuration File).
* The new configuration file will appear in Solution Explorer as App.Config.
* Add your settings into this file using the following as a template:
<configuration>
  <appSettings>
    <add key="setting1" value="key"/>
  </appSettings>
  <connectionStrings>
    <add name="prod" connectionString="YourConnectionString"/>
  </connectionStrings>
</configuration>
* Retrieve them like this:
private void Form1_Load(object sender, EventArgs e)
{
    string setting = ConfigurationManager.AppSettings["setting1"];
    string conn = ConfigurationManager.ConnectionStrings["prod"].ConnectionString;
}
* When built, your output folder will contain a file called <assemblyname>.exe.config. This will be a copy of the App.Config file. No further work should need to be done by the developer to create this file.
A: The default name for a configuration file is [yourexe].exe.config. So notepad.exe will have a configuration file named notepad.exe.config, in the same folder as the program. This is a general configuration file for all aspects of the CLR and Framework, but it can contain your own settings under an <appSettings> node.
The <appSettings> element creates a collection of name-value pairs which can be accessed as System.Configuration.ConfigurationSettings.AppSettings. There is no way to save changes back to the configuration file, however.
It is also possible to add your own custom elements to a configuration file - for example, to define a structured setting - by creating a class that implements IConfigurationSectionHandler and adding it to the <configSections> element of the configuration file. You can then access it by calling ConfigurationSettings.GetConfig.
.NET 2.0 adds a new class, System.Configuration.ConfigurationManager, which supports multiple files, with per-user overrides of per-system data. It also supports saving modified configurations back to settings files.
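For instance, a minimal sketch of that save support, assuming an appSettings key named setting1 already exists (the key name is illustrative):
// Open the exe's configuration, change a value, and write it back.
Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
config.AppSettings.Settings["setting1"].Value = "newValue";
config.Save(ConfigurationSaveMode.Modified);
ConfigurationManager.RefreshSection("appSettings"); // so subsequent reads see the new value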
Visual Studio creates a file called App.config, which it copies to the EXE folder, with the correct name, when the project is built.
A: The best (IMHO) article about .NET Application configuration is on CodeProject, Unraveling the Mysteries of .NET 2.0 Configuration. And my next favorite (shorter) article about sections in the .NET configuration files is Understanding Section Handlers - App.config File.
A: In Windows Forms, you have the app.config file, which is very similar to the web.config file. But since what I see you need it for are custom values, I suggest using Settings.
To do that, open your project properties, and then go to settings. If a settings file does not exist you will have a link to create one. Then, you can add the settings to the table you see there, which would generate both the appropriate XML, and a Settings class that can be used to load and save the settings.
The settings class will be named something like DefaultNamespace.Properties.Settings. Then, you can use code similar to:
using DefaultNamespace.Properties;

namespace DefaultNamespace {
    class Class {
        public int LoadMySettingValue() {
            return Settings.Default.MySettingValue;
        }
        public void SaveMySettingValue(int value) {
            Settings.Default.MySettingValue = value;
        }
    }
}
A: From a quick read of the previous answers, they look correct, but it doesn't look like anyone mentioned the new configuration facilities in Visual Studio 2008. It still uses app.config (copied at compile time to YourAppName.exe.config), but there is a UI widget to set properties and specify their types. Double-click Settings.settings in your project's "Properties" folder.
The best part is that accessing this property from code is typesafe - the compiler will catch obvious mistakes like mistyping the property name. For example, a property called MyConnectionString in app.config would be accessed like:
string s = Properties.Settings.Default.MyConnectionString;
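Writing a value back is equally typesafe; a small sketch, assuming MyUserSetting is a user-scoped setting (application-scoped settings, the usual scope for connection strings, are read-only at run time):
Properties.Settings.Default.MyUserSetting = "new value";
Properties.Settings.Default.Save(); // persists user-scoped settings to disk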
A: I agree with the other answers that point you to app.config. However, rather than reading values directly from app.config, you should create a utility class (AppSettings is the name I use) to read them and expose them as properties. The AppSettings class can be used to aggregate settings from several stores, such as values from app.config and application version info from the assembly (AssemblyVersion and AssemblyFileVersion).
A: A very simple way of doing this is to use your own custom Settings class.
A: You want to use an App.Config.
When you add a new item to a project there is something called Applications Configuration file. Add that.
Then you add keys in the configuration/appsettings section
Like:
<configuration>
  <appSettings>
    <add key="MyKey" value="false"/>
  </appSettings>
</configuration>
Access the members by doing:
System.Configuration.ConfigurationSettings.AppSettings["MyKey"];
This works in .NET 2 and above.
A: You should create an App.config file (very similar to web.config).
You should right click on your project, add new item, and choose new "Application Configuration File".
Ensure that you add using System.Configuration in your project.
Then you can add values to it:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="setting1" value="key"/>
  </appSettings>
  <connectionStrings>
    <add name="prod" connectionString="YourConnectionString"/>
  </connectionStrings>
</configuration>
private void Form1_Load(object sender, EventArgs e)
{
    string setting = ConfigurationManager.AppSettings["setting1"];
    string conn = ConfigurationManager.ConnectionStrings["prod"].ConnectionString;
}
Just a note: According to Microsoft, you should use ConfigurationManager instead of ConfigurationSettings (see the remarks section):
"The ConfigurationSettings class provides backward compatibility only. For new applications you should use the ConfigurationManager class or WebConfigurationManager class instead. "
A: Regarding:
System.Configuration.ConfigurationSettings.AppSettings["MyKey"];
ConfigurationSettings.AppSettings has been deprecated and is now considered obsolete (link).
In addition, the appSettings section of the app.config has been replaced by the applicationSettings section.
As someone else mentioned, you should be using System.Configuration.ConfigurationManager (link), which is new for .NET 2.0.
A: What version of .NET and Visual Studio are you using?
When you created the new project, you should have a file in your solution called app.config. That is the default configuration file.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "95"
}
|
Q: Visually designing a database structure I am quite happy to code out tables by hand when making a database but it's not the easiest way to convey information about a database to someone else, especially someone that's not so comfortable coding the tables via a script and would instead use something such at phpMyAdmin.
Is there thus a free program (for me to use it, it'll have to work on a Mac, but feel free to suggest PC apps for others with the same question) or script (preferably in PHP or Python) that allows you to design database structure and will then output either a basic diagram or the code as chosen by the user?
A: Well on the PC you can use MS Visio to produce a DB Entity diagram.
It will even reverse engineer one from an existing Database.
A pain to set up the first time you use it, but quite handy thereafter.
A: Open System Architect has some potential. It's very similar to Visio.
A: I'm a big fan of ARGO UML from Tigris.org. Draws nice pictures using standard UML notation. It does some code generation, but mostly Java classes, which isn't SQL DDL, so that may not be close enough to what you want to do.
You can look at the Data Modeling Tools list and see if anything there is better than Argo UML. Many of the items on this list are free or cheap.
Also, if you're using Eclipse or NetBeans, there are many design plug-ins, some of which may have the features you're looking for.
A: I use the aptly named Database Design Tool. It's extremely simple, but unfortunately it isn't developed any more. It's the best tool I've come across that is free, and at the end of designing your tables, it generates the T-SQL for you. It's also language independent.
A: You could try out MySQL Workbench, which originates in the open-source DBDesigner. There's a free community edition available. You can design the database via ER diagrams or reverse engineer an existing database.
A: MySQL Workbench is the best DB design tool that I've tried
A: I'm currently checking out SQL Power Architect (both w/ PostgreSQL and Mysql - but it also supports other vendors) and it definitely seems promising. Does both forward and backward SQL engineering. The Community Edition is open source and cross platform (Java). You can check it out yourself: http://code.google.com/p/power-architect/
When strictly dealing w/ MySQL so far I've otherwise used MySQL Workbench, http://wb.mysql.com/ which performed reliably.
A: I always have enjoyed Eclipse. There are a few plugins for it that look like they will do what you want.
A: SchemaBank (a web-based SaaS vendor) can turn your ER design into SQL statements for MySQL and PG. It can't do graphics export yet, though. The nice thing is you don't need to install anything ('cos it's browser-based) and it costs virtually nothing. You should be able to share your design with other people too.
A: SQLDeveloper from Oracle can work with Oracle and MySQL databases.
http://www.oracle.com/us/corporate/press/020861
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: How do I access (listen for) the multimedia keys (play/pause) in Mac OS X? I want to write a Songbird extension that binds the multimedia keys available on all Apple Mac OS X platforms. Unfortunately this isn't an easy Google search and I can't find any docs.
Can anyone point me resources on accessing these keys or tell me how to do it?
I have extensive programming experience, but this will be my first time coding in both MacOSX and XUL (Firefox, etc), so any tips on either are welcome.
Please note that these are not regular key events. I assume it must be a different type of system event that I will need to hook or subscribe to.
A: This blog post has a solution:
http://www.rogueamoeba.com/utm/posts/Article/mediaKeys-2007-09-29-17-00.html
You basically need to subclass NSApplication and override sendEvent, looking for special scan codes. I don't know what Songbird is, but if it's not a real application then I doubt you'll be able to do this.
Or maybe you can, a simple category may suffice:
Or maybe you can, a simple category may suffice:
@implementation NSApplication(WantMediaKeysCategoryKBye)
- (void)sendEvent: (NSEvent*)event
{
    // intercept media keys here
}
@end
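Fleshing that out, here is a sketch of what the interception might look like as an NSApplication subclass (the class name is illustrative and must be set as the app's principal class; the subtype value 8 and the data1 bit layout are taken from that blog post, and NX_KEYTYPE_PLAY comes from IOKit's ev_keymap.h):
#import <Cocoa/Cocoa.h>
#import <IOKit/hidsystem/ev_keymap.h>

@interface MediaKeyApplication : NSApplication
@end

@implementation MediaKeyApplication
- (void)sendEvent: (NSEvent*)event
{
    // Media keys arrive as NSSystemDefined events with subtype 8.
    if ([event type] == NSSystemDefined && [event subtype] == 8) {
        int keyCode  = (([event data1] & 0xFFFF0000) >> 16);
        int keyFlags = ([event data1] & 0x0000FFFF);
        BOOL keyDown = ((keyFlags & 0xFF00) >> 8) == 0xA;
        if (keyCode == NX_KEYTYPE_PLAY && keyDown) {
            NSLog(@"play/pause pressed");
            return; // swallow the event instead of passing it on
        }
    }
    [super sendEvent: event];
}
@end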
A: Are you sure your multimedia keys are working in your installation? Every single key generates a scan code which is translated into a key code by the kernel. If xev doesn't show you any keycodes I guess those scan codes aren't mapped and so the kernel has no knowledge of them.
http://gentoo-wiki.com/HOWTO_Use_Multimedia_Keys has a nice explanation of finding key codes and offers help on how you can find raw scan codes and translate them into key codes.
A: xev might help you if you want to find out which codes are being sent by multimedia keys.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: How can I horizontally center an element? How can I horizontally center a <div> within another <div> using CSS?
<div id="outer">
<div id="inner">Foo foo</div>
</div>
A: Use the below CSS content for #inner div:
#inner {
    width: 50%;
    margin-left: 25%;
}
I mostly use this CSS content to center divs.
A: It's possible using CSS 3 Flexbox. You have two methods when using Flexbox.
* Set the parent to display: flex; and add justify-content: center; and align-items: center; to your parent element:
#outer {
    display: flex;
    justify-content: center;
    align-items: center;
}
<div id="outer" style="width:100%">
    <div id="inner">Foo foo</div>
</div>
* Set the parent to display: flex and add margin: auto; to the child:
#outer {
    display: flex;
}
#inner {
    margin: auto;
}
<div id="outer" style="width:100%">
    <div id="inner">Foo foo</div>
</div>
A: CSS 3:
You can use the following style on the parent container to distribute child elements evenly horizontally:
display: flex;
justify-content: space-between; /* space-between or space-around */
A nice DEMO regarding the different values for justify-content.
CanIUse: Browser Compatibility
Try it!:
#containerdiv {
    display: flex;
    justify-content: space-between;
}

#containerdiv > div {
    background-color: blue;
    width: 50px;
    color: white;
    text-align: center;
}
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width">
    <title>JS Bin</title>
</head>
<body>
    <div id="containerdiv">
        <div>88</div>
        <div>77</div>
        <div>55</div>
        <div>33</div>
        <div>40</div>
        <div>45</div>
    </div>
</body>
</html>
A: The way I usually do it is using absolute position:
#inner {
    left: 0;
    right: 0;
    margin-left: auto;
    margin-right: auto;
    position: absolute;
}
The outer div doesn't need any extra properties for this to work.
A: I recently had to center a "hidden" div (i.e., display:none;) that had a tabled form within it that needed to be centered on the page. I wrote the following jQuery code to display the hidden div and then update the CSS content to the automatic generated width of the table and change the margin to center it. (The display toggle is triggered by clicking on a link, but this code wasn't necessary to display.)
NOTE: I'm sharing this code, because Google brought me to this Stack Overflow solution and everything would have worked except that hidden elements don't have any width and can't be resized/centered until after they are displayed.
$(function(){
    $('#inner').show().width($('#innerTable').width()).css('margin','0 auto');
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="inner" style="display:none;">
<form action="">
<table id="innerTable">
<tr><td>Name:</td><td><input type="text"></td></tr>
<tr><td>Email:</td><td><input type="text"></td></tr>
<tr><td>Email:</td><td><input type="submit"></td></tr>
</table>
</form>
</div>
A: #inner {
    width: 50%;
    margin: 0 auto;
}
A: You can attain this using the CSS Flexbox. You just need to apply 3 properties to the parent element to get everything working.
#outer {
    display: flex;
    align-content: center;
    justify-content: center;
}
Have a look at the code below this will make you understand the properties much better.
Get to know more about CSS Flexbox
#outer {
    display: flex;
    align-items: center;
    justify-content: center;
    border: 1px solid #ddd;
    width: 100%;
    height: 200px;
}
<div id="outer">
    <div id="inner">Foo foo</div>
</div>
A:
How can I horizontally center a <div> within another <div> using CSS?
Here's a non-exhaustive list of centering approaches, using:
* margin and auto
* margin and calc()
* padding and box-sizing and calc()
* position: absolute and negative margin-left
* position: absolute and negative transform: translateX()
* display: inline-block and text-align: center
* display: table and display: table-cell
* display: flex and justify-content: center
* display: grid and justify-items: center
1. Center a block-level element using auto for horizontal margins

.outer {
    width: 300px;
    height: 180px;
    background-color: rgb(255, 0, 0);
}

.inner {
    width: 150px;
    height: 180px;
    margin: 0 auto;
    background-color: rgb(255, 255, 0);
}

<div class="outer">
    <div class="inner"></div>
</div>
2. Center a block-level element using calc with horizontal margins

.outer {
    width: 300px;
    height: 180px;
    background-color: rgb(255, 0, 0);
}

.inner {
    width: 150px;
    height: 180px;
    margin: 0 calc((300px - 150px) / 2);
    background-color: rgb(255, 255, 0);
}

<div class="outer">
    <div class="inner"></div>
</div>
3. Center a block-level element using calc with horizontal padding + box-sizing

.outer {
    width: 300px;
    height: 180px;
    padding: 0 calc((300px - 150px) / 2);
    background-color: rgb(255, 0, 0);
    box-sizing: border-box;
}

.inner {
    width: 150px;
    height: 180px;
    background-color: rgb(255, 255, 0);
}

<div class="outer">
    <div class="inner"></div>
</div>
4. Center a block-level element using position: absolute with left: 50% and negative margin-left
.outer {
position: relative;
width: 300px;
height: 180px;
background-color: rgb(255, 0, 0);
}
.inner {
position: absolute;
left: 50%;
width: 150px;
height: 180px;
margin-left: -75px;
background-color: rgb(255, 255, 0);
}
<div class="outer">
<div class="inner"></div>
</div>
5. Center a block-level element using position: absolute with left: 50% and negative transform: translateX()
.outer {
position: relative;
width: 300px;
height: 180px;
background-color: rgb(255, 0, 0);
}
.inner {
position: absolute;
left: 50%;
width: 150px;
height: 180px;
background-color: rgb(255, 255, 0);
transform: translateX(-75px);
}
<div class="outer">
<div class="inner"></div>
</div>
6. Center an element using display: inline-block and text-align: center
.outer {
position: relative;
width: 300px;
height: 180px;
text-align: center;
background-color: rgb(255, 0, 0);
}
.inner {
display: inline-block;
width: 150px;
height: 180px;
background-color: rgb(255, 255, 0);
}
<div class="outer">
<div class="inner"></div>
</div>
7. Center an element using display: table, padding and box-sizing
.outer {
display: table;
width: 300px;
height: 180px;
padding: 0 75px;
background-color: rgb(255, 0, 0);
box-sizing: border-box;
}
.inner {
display: table-cell;
background-color: rgb(255, 255, 0);
}
<div class="outer">
<div class="inner"></div>
</div>
8. Center an element using display: flex and justify-content: center
.outer {
display: flex;
justify-content: center;
width: 300px;
height: 180px;
background-color: rgb(255, 0, 0);
}
.inner {
flex: 0 0 150px;
background-color: rgb(255, 255, 0);
}
<div class="outer">
<div class="inner"></div>
</div>
9. Center an element using display: grid and justify-items: center
.outer {
display: grid;
justify-items: center;
width: 300px;
height: 180px;
background-color: rgb(255, 0, 0);
}
.inner {
width: 150px;
background-color: rgb(255, 255, 0);
}
<div class="outer">
<div class="inner"></div>
</div>
A: For Firefox and Chrome:
<div style="width:100%;">
<div style="width: 50%; margin: 0px auto;">Text</div>
</div>
For Internet Explorer, Firefox, and Chrome:
<div style="width:100%; text-align:center;">
<div style="width: 50%; margin: 0px auto; text-align:left;">Text</div>
</div>
The text-align property is optional for modern browsers, but it is necessary in Internet Explorer Quirks Mode for legacy browser support.
A: Use:
#outerDiv {
width: 500px;
}
#innerDiv {
width: 200px;
margin: 0 auto;
}
<div id="outerDiv">
<div id="innerDiv">Inner Content</div>
</div>
A: This is the best way to horizontally center a <div>:
#outer {
display: flex;
align-items: center;
justify-content: center;
}
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<div id="outer">
<div id="inner">Foo foo</div>
</div>
</body>
</html>
A: To centre an element horizontally you can use these methods:
Method 1: Using margin property
If the element is a block-level element then you can centre it by using the margin property. Set margin-left and margin-right to auto (shorthand: margin: 0 auto).
This will align the element to the centre horizontally.
If the element is not a block-level element then add display: block property to it.
#outer {
background-color: silver;
}
#inner {
width: max-content;
margin: 0 auto;
background-color: #f07878;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
Method 2: Using CSS flexbox
Create a flexbox container and use justify-content property and set it to center. This will align all elements horizontally to the centre of the webpage.
#outer {
display: flex;
justify-content: center;
background-color: silver;
}
#inner {
background-color: #f07878;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
Method 3: Using position absolute technique
This is a classic method to centre the element. Set position: relative on the outer element. Set the inner element's position to absolute and left: 50%. This will push the inner element to start from the centre of the outer element. Now use the transform property and set transform: translateX(-50%); this will centre the element horizontally.
#outer {
position: relative;
background-color: silver;
}
#inner {
position: absolute;
left: 50%;
transform: translateX(-50%);
background-color: #f07878;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
A: Another solution for this, without having to set a width for one of the elements, is using the CSS 3 transform property.
#outer {
position: relative;
}
#inner {
position: absolute;
left: 50%;
transform: translateX(-50%);
}
The trick is that translateX(-50%) shifts the #inner element to the left by 50 percent of its own width. You can use the same trick for vertical alignment, as sketched below.
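For instance, a minimal sketch of the vertical variant (assuming the parent is positioned):
#inner {
position: absolute;
top: 50%;
transform: translateY(-50%); /* shifts back by half its own height */
}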
Here's a Fiddle showing horizontal and vertical alignment.
More information is on Mozilla Developer Network.
A: Make it simple!
#outer {
display: flex;
justify-content: center;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
A: Center an element without the need of a wrapper/parent, with dynamic height & width
No side effect: it will not limit the centered element's width to less than the viewport width, as margins in Flexbox inside a centered element can
position: fixed;
top: 0; left: 0;
transform: translate(calc(50vw - 50%));
Horizontally + vertically center, if its height is same as the width:
position: fixed;
top: 0; left: 0;
transform: translate(calc(50vw - 50%), calc(50vh - 50%));
A: Chris Coyier wrote an excellent post, 'Centering in the Unknown', on his blog. It's a roundup of multiple solutions. I posted one that isn't posted in this question. It has more browser support than the Flexbox solution, and you're not using display: table;, which could break other things.
/* This parent can be any width and height */
.outer {
text-align: center;
}
/* The ghost, nudged to maintain perfect centering */
.outer:before {
content: '.';
display: inline-block;
height: 100%;
vertical-align: middle;
width: 0;
overflow: hidden;
}
/* The element to be centered, can
also be of any width and height */
.inner {
display: inline-block;
vertical-align: middle;
width: 300px;
}
A: With flexbox it is very easy to center the div both horizontally and vertically.
#inner {
border: 0.05em solid black;
}
#outer {
border: 0.05em solid red;
width:100%;
display: flex;
justify-content: center;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
To center the div vertically as well, use the property align-items: center.
Other Solutions
You can apply this CSS to the inner <div>:
#inner {
width: 50%;
margin: 0 auto;
}
Of course, you don't have to set the width to 50%. Any width less than the containing <div> will work. The margin: 0 auto is what does the actual centering.
If you are targeting Internet Explorer 8 (and later), it might be better to have this instead:
#inner {
display: table;
margin: 0 auto;
}
It will make the inner element center horizontally and it works without setting a specific width.
Working example here:
#inner {
display: table;
margin: 0 auto;
border: 1px solid black;
}
#outer {
border: 1px solid red;
width:100%
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
A: I recently found an approach:
#outer {
position: absolute;
left: 50%;
}
#inner {
position: relative;
left: -50%;
}
Both elements must be the same width to function correctly.
A: One of the easiest ways...
<!DOCTYPE html>
<html>
<head>
<style>
#outer-div {
width: 100%;
text-align: center;
background-color: #000
}
#inner-div {
display: inline-block;
margin: 0 auto;
padding: 3px;
background-color: #888
}
</style>
</head>
<body>
<div id ="outer-div" width="100%">
<div id ="inner-div"> I am a easy horizontally centered div.</div>
<div>
</body>
</html>
A: If you have a parent of some height (say, body { height: 200px; }),
or, like below, a parent div#outer with height 200px, then add CSS content as below.
HTML:
<div id="outer">
<div id="centered">Foo foo</div>
</div>
CSS:
#outer{
display: flex;
width: 100%;
height: 200px;
}
#centered {
margin: auto;
}
Then the child content, say the div#centered content, will be vertically and horizontally centered, without using any position CSS. To remove the vertical centering behavior, just modify to the below CSS code:
#centered {
margin: 0px auto;
}
or
#outer{
display: flex;
width: 100%;
height: 200px;
}
#centered {
margin: auto;
}
<div id="outer">
<div id="centered">Foo foo</div>
</div>
Demo: https://jsfiddle.net/jinny/p3x5jb81/5/
Borders are added below only to show that the inner div is not 100% wide by default:
#outer{
display: flex;
width: 100%;
height: 200px;
border: 1px solid #000000;
}
#centered {
margin: auto;
border: 1px solid #000000;
}
<div id="outer">
<div id="centered">Foo foo</div>
</div>
DEMO: http://jsfiddle.net/jinny/p3x5jb81/9
A: With Sass (SCSS syntax) you can do this with a mixin:
With translate
// Center horizontal mixin
@mixin center-horizontally {
position: absolute;
left: 50%;
transform: translateX(-50%);
}
// Center horizontal class
.center-horizontally {
@include center-horizontally;
}
In an HTML tag:
<div class="center-horizontally">
I'm centered!
</div>
Remember to add position: relative; to the parent HTML element.
With Flexbox
Using flex, you can do this:
@mixin center-horizontally {
display: flex;
justify-content: center;
}
// Center horizontal class
.center-horizontally {
@include center-horizontally;
}
In an HTML tag:
<div class="center-horizontally">
<div>I'm centered!</div>
</div>
Try this CodePen!
A: To align a div in the middle of another div:
.outer{
width: 300px; /* For example */
height: 300px; /* For example */
background: red;
}
.inner{
position: relative;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
width: 200px;
height: 200px;
background: yellow;
}
<body>
<div class='outer'>
<div class='inner'></div>
</div>
</body>
This will align the internal div in the middle, both vertically and horizontally.
A: Just do this:
<div id="outer">
<div id="inner">Foo foo</div>
</div>
CSS
#outer{
display: grid;
place-items: center;
}
A: This can be done using many methods. Most of the answers given are correct and work properly. I'll give one more pattern.
In the HTML file
<div id="outer">
<div id="inner">Foo foo</div>
</div>
In the CSS file
#outer{
width: 100%;
}
#inner{
width: fit-content; /* without a width (or a shrink-to-fit display), auto margins have no visible effect */
margin: auto;
}
A: For example, see this link and the snippet below:
div#outer {
height: 120px;
background-color: red;
}
div#inner {
width: 50%;
height: 100%;
background-color: green;
margin: 0 auto;
text-align: center; /* For text alignment to center horizontally. */
line-height: 120px; /* For text alignment to center vertically. */
}
<div id="outer" style="width:100%;">
<div id="inner">Foo foo</div>
</div>
If you have a lot of children under a parent, your CSS content must be like this example on fiddle.
The HTML content looks like this:
<div id="outer" style="width:100%;">
<div class="inner"> Foo Text </div>
<div class="inner"> Foo Text </div>
<div class="inner"> Foo Text </div>
<div class="inner"> </div>
<div class="inner"> </div>
<div class="inner"> </div>
<div class="inner"> </div>
<div class="inner"> </div>
<div class="inner"> Foo Text </div>
</div>
Then see this example on fiddle.
A: The best approaches are with CSS3.
The old box model (deprecated)
display: box and its properties box-pack, box-align, box-orient, box-direction etc. have been replaced by flexbox. While they may still work, they are not recommended to be used in production.
#outer {
width: 100%;
/* Firefox */
display: -moz-box;
-moz-box-pack: center;
-moz-box-align: center;
/* Safari and Chrome */
display: -webkit-box;
-webkit-box-pack: center;
-webkit-box-align: center;
/* W3C */
display: box;
box-pack: center;
box-align: center;
}
#inner {
width: 50%;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
Depending on your needs, you may also use the box-orient, box-flex, and box-direction properties.
The modern box model with Flexbox
#outer {
display: flex;
flex-direction: row;
flex-wrap: wrap;
justify-content: center;
align-items: center;
}
Read more about centering the child elements
*
*CSS Box Model Module Level 3
*Box model (CSS2)
*box-align on MDN
And this explains why the box model is the best approach:
*
*Why is the W3C box model considered better?
A: Centering only horizontally
In my experience, the best way to center a box horizontally is to apply the following properties:
The container:
*
*should have text-align: center;
The content box:
*
*should have display: inline-block;
Demo:
.container {
width: 100%;
height: 120px;
background: #CCC;
text-align: center;
}
.centered-content {
display: inline-block;
background: #FFF;
padding: 20px;
border: 1px solid #000;
}
<div class="container">
<div class="centered-content">
Center this!
</div>
</div>
See also this Fiddle!
Centering both horizontally & vertically
In my experience, the best way to center a box both vertically and horizontally is to use an additional container and apply the following properties:
The outer container:
*
*should have display: table;
The inner container:
*
*should have display: table-cell;
*should have vertical-align: middle;
*should have text-align: center;
The content box:
*
*should have display: inline-block;
Demo:
.outer-container {
display: table;
width: 100%;
height: 120px;
background: #CCC;
}
.inner-container {
display: table-cell;
vertical-align: middle;
text-align: center;
}
.centered-content {
display: inline-block;
background: #FFF;
padding: 20px;
border: 1px solid #000;
}
<div class="outer-container">
<div class="inner-container">
<div class="centered-content">
Center this!
</div>
</div>
</div>
See also this Fiddle!
A: Flexbox
display: flex behaves like a block element and lays out its content according to the flexbox model. It works with justify-content: center.
Please note: Flexbox is compatible with all browsers except Internet Explorer. See display: flex not working on Internet Explorer for a complete and up-to-date list of browser compatibility.
#inner {
display: inline-block;
}
#outer {
display: flex;
justify-content: center;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
Text-align: center
Applying text-align: center centers the inline contents within the line box. However, since the inner div by default fills the available width, you have to set a specific width or use one of the following:
*
*display: block
*display: inline
*display: inline-block
#inner {
display: inline-block;
}
#outer {
text-align: center;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
Margin: 0 auto
Using margin: 0 auto is another option, and it is more suitable for older browser compatibility. It works together with display: table.
#inner {
display: table;
margin: 0 auto;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
Transform
transform: translate lets you modify the coordinate space of the CSS visual formatting model. Using it, elements can be translated, rotated, scaled, and skewed. To center horizontally it requires position: absolute and left: 50%.
#inner {
position: absolute;
left: 50%;
transform: translate(-50%, 0%);
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
<center> (Deprecated)
The tag <center> is the HTML alternative to text-align: center. It works on older browsers and most of the new ones but it is not considered a good practice since this feature is obsolete and has been removed from the Web standards.
#inner {
display: inline-block;
}
<div id="outer">
<center>
<div id="inner">Foo foo</div>
</center>
</div>
A: I just use the simplest solution, but it works in all browsers:
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>center a div within a div?</title>
<style type="text/css">
*{
margin: 0;
padding: 0;
}
#outer{
width: 80%;
height: 500px;
background-color: #003;
margin: 0 auto;
}
#outer p{
color: #FFF;
text-align: center;
}
#inner{
background-color: #901;
width: 50%;
height: 100px;
margin: 0 auto;
}
#inner p{
color: #FFF;
text-align: center;
}
</style>
</head>
<body>
<div id="outer"><p>this is the outer div</p>
<div id="inner">
<p>this is the inner div</p>
</div>
</div>
</body>
</html>
A: Try the CSS code below:
<style>
#outer {
display: inline-block;
width: 100%;
height: 100%;
text-align: center;
vertical-align: middle;
}
#outer > #inner {
display: inline-block;
font-size: 19px;
margin: 20px;
max-width: 320px;
min-height: 20px;
min-width: 30px;
padding: 14px;
vertical-align: middle;
}
</style>
Apply above CSS via below HTML code, to center horizontally and to center vertically (aka: align vertically in middle):
<div id="outer">
<div id="inner">
...These <div>ITEMS</div> <img src="URL"/> are in center...
</div>
</div>
After applying CSS & using above HTML, that section in webpage would look like this:
BEFORE applying code:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃V..Middle & H..Center ┣━1
┃ ┣━2
┃ ┣━3
┗┳━━━━━━┳━━━━━━┳━━━━━━┳━━━━━━┳┛
1 2 3 4 5
AFTER:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ┣━1
┃ V..Middle & H..Center ┣━2
┃ ┣━3
┗┳━━━━━━┳━━━━━━┳━━━━━━┳━━━━━━┳┛
1 2 3 4 5
To center "inner" elements horizontally inside the "outer" wrapper, the "inner" elements (of type DIV, IMG, etc) need to have "inline" CSS properties, such as these: display:inline or display:inline-block, etc, THEN "outer" CSS property text-align:center can work on "inner" elements.
So near to minimum CSS code are these:
<style>
#outer {
width: 100%;
text-align: center;
}
#outer > .inner2 {
display: inline-block;
}
</style>
Apply above CSS via below HTML code, to center (horizontally):
<div id="outer">
<img class="inner2" src="URL-1"> <img class="inner2" src="URL-2">
</div>
After applying CSS & using above HTML, that line in webpage would look like this:
BEFORE applying code:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃┍━━━━━━━━━━┑ ┃
┃│ img URL1 │ ┃
┃┕━━━━━━━━━━┙ ┃
┃┍━━━━━━━━━━┑ ┃
┃│ img URL2 │ ┃
┃┕━━━━━━━━━━┙ ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
AFTER:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ┍━━━━━━━━━━┑ ┍━━━━━━━━━━┑ ┣━1
┃ │ img URL1 │ │ img URL2 │ ┣━2
┃ ┕━━━━━━━━━━┙ ┕━━━━━━━━━━┙ ┣━3
┗┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳┛
1 2 3 4 5
If you want to avoid specifying the class="inner2" attribute every time for each "inner" element, then use such CSS early on:
<style>
#outer {
width: 100%;
text-align: center;
}
#outer > img, #outer > div {
display: inline-block;
}
</style>
So above CSS can be applied like below, to center items (horizontally) inside the "outer" wrapper:
<div id="outer">
<img src="URL-1"> Text1 <img src="URL-2"> Text2
</div>
After applying CSS & using above HTML, that line in webpage would look like this:
BEFORE applying code:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃┍━━━━━━━━┑ ┃
┃│img URL1│ ┃
┃┕━━━━━━━━┙ ┃
┃Text1 ┃
┃┍━━━━━━━━┑ ┃
┃│img URL2│ ┃
┃┕━━━━━━━━┙ ┃
┃Text2 ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━┛
AFTER:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ┍━━━━━━━━━┑ ┍━━━━━━━━┑ ┣━1
┃ │img URL1 │ │img URL2│ ┣━2
┃ ┕━━━━━━━━━┙Text1┕━━━━━━━━┙Text2 ┣━3
┗┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳┛
1 2 3 4 5
The "id" attribute's unique name/value should be used only once for only one HTML element in one webpage, So CSS properties of same "id" name cannot be repeatedly used on multiple HTML elements, (some web-browser incorrectly allows to use same id on multiple elements).
So when you need many lines in same webpage, that need to show internal elements/items in center (horizontally) in that line, then you may use such CSS "class" (aka: CSS group, CSS repeater):
<style>
.outer2 {
width: 100%;
text-align: center;
}
.outer2 > div, .outer2 > div > img {
display: inline-block;
}
</style>
So above CSS can be applied like below, to center items (horizontally) inside the "outer2" wrapper:
<div class="outer2">
<div>
Line1: <img src="URL-1"> Text1 <img src="URL-2">
</div>
</div>
...
<div class="outer2">
<div>
Line2: <img src="URL-3"> Text2 <img src="URL-4">
</div>
</div>
After applying CSS & using above HTML, those lines in webpage would look like this:
BEFORE applying code:
┏━━━━━━━━━━━━━━━━━━━━━━┓
┃Line1: ┃
┃┍━━━━━━━━┑ ┃
┃│img URL1│ ┃
┃┕━━━━━━━━┙ ┃
┃Text1 ┃
┃┍━━━━━━━━┑ ┃
┃│img URL2│ ┃
┃┕━━━━━━━━┙ ┃
┗━━━━━━━━━━━━━━━━━━━━━━┛
........................
┏━━━━━━━━━━━━━━━━━━━━━━┓
┃Line2: ┃
┃┍━━━━━━━━┑ ┃
┃│img URL3│ ┃
┃┕━━━━━━━━┙ ┃
┃Text2 ┃
┃┍━━━━━━━━┑ ┃
┃│img URL4│ ┃
┃┕━━━━━━━━┙ ┃
┗━━━━━━━━━━━━━━━━━━━━━━┛
AFTER:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ┍━━━━━━━━┑ ┍━━━━━━━━┑ ┣━1
┃ │img URL1│ │img URL2│ ┣━2
┃ Line1:┕━━━━━━━━┙Text1┕━━━━━━━━┙ ┣━3
┗┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳┛
1 2 3 4 5
.......................................
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ┍━━━━━━━━┑ ┍━━━━━━━━┑ ┣━1
┃ │img URL3│ │img URL4│ ┣━2
┃ Line2:┕━━━━━━━━┙Text2┕━━━━━━━━┙ ┣━3
┗┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳┛
1 2 3 4 5
To vertically align in middle, we would need to use below CSS code:
<style>
.outer2 {
width: 100%;
text-align: center;
vertical-align: middle;
}
.outer2 > div, .outer2 > div > img {
display: inline-block;
vertical-align: middle;
}
</style>
So above CSS can be applied like below, to center items horizontally and to vertically align in middle of the "outer2" wrapper:
<div class="outer2">
<div>
Line1: <img src="URL-1"> Text1 <img src="URL-2">
</div>
</div>
...
<div class="outer2">
<div>
Line2: <img src="URL-3"> Text2 <img src="URL-4">
</div>
</div>
After applying CSS & using above HTML, those lines in webpage would look like this:
BEFORE applying code:
┏━━━━━━━━━━━━━━━━━━━━━━┓
┃Line1: ┃
┃┍━━━━━━━━┑ ┃
┃│img URL1│ ┃
┃┕━━━━━━━━┙ ┃
┃Text1 ┃
┃┍━━━━━━━━┑ ┃
┃│img URL2│ ┃
┃┕━━━━━━━━┙ ┃
┗━━━━━━━━━━━━━━━━━━━━━━┛
........................
┏━━━━━━━━━━━━━━━━━━━━━━┓
┃Line2: ┃
┃┍━━━━━━━━┑ ┃
┃│img URL3│ ┃
┃┕━━━━━━━━┙ ┃
┃Text2 ┃
┃┍━━━━━━━━┑ ┃
┃│img URL4│ ┃
┃┕━━━━━━━━┙ ┃
┗━━━━━━━━━━━━━━━━━━━━━━┛
AFTER:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ┍━━━━━━━━┑ ┍━━━━━━━━┑ ┣━1
┃ Line1:│img URL1│Text1│img URL2│ ┣━2
┃ ┕━━━━━━━━┙ ┕━━━━━━━━┙ ┣━3
┗┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳┛
1 2 3 4 5
.......................................
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ┍━━━━━━━━┑ ┍━━━━━━━━┑ ┣━1
┃ Line2:│img URL3│Text2│img URL4│ ┣━2
┃ ┕━━━━━━━━┙ ┕━━━━━━━━┙ ┣━3
┗┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳┛
1 2 3 4 5
A: You can use CSS Flexbox on the parent element:
#outer {
display: flex;
justify-content: center;
}
You can learn more about it on this link: https://css-tricks.com/snippets/css/a-guide-to-flexbox/
A: The easiest answer: Add margin:auto; to inner.
<div class="outer">
<div class="inner">
Foo foo
</div>
</div>
CSS code
.outer{
width: 100%;
height: 300px;
background: yellow;
}
.inner{
width: 30%;
height: 200px;
margin: auto;
background: red;
text-align: center
}
Check my CodePen link: http://codepen.io/feizel/pen/QdJJrK
A: It can also be centered horizontally and vertically using absolute positioning, like this:
#outer{
position: relative;
}
#inner{
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%)
}
A: The best known way which is used widely and work in many browsers including the old ones, is using margin as below:
#parent {
width: 100%;
background-color: #CCCCCC;
}
#child {
width: 30%; /* We need the width */
margin: 0 auto; /* This does the magic */
color: #FFFFFF;
background-color: #000000;
padding: 10px;
text-align: center;
}
<div id="parent">
<div id="child">I'm the child and I'm horizontally centered! My daddy is a greyish div dude!</div>
</div>
Run the code to see how it works. Also, there are two important things you shouldn't forget in your CSS when you try to center this way: margin: 0 auto;, which makes the div center as wanted, and the width of the child; without a width it won't get centered as expected!
A: Center a div in a div
.outer {
display: -webkit-flex;
display: flex;
/* -webkit-justify-content: center; */
/* justify-content: center; */
/* align-items: center; */
width: 100%;
height: 100px;
background-color: lightgrey;
}
.inner {
background-color: cornflowerblue;
padding: 2rem;
margin: auto;
/* align-self: center; */
}
<div class="outer">
<div class="inner">Foo foo</div>
</div>
A: Use:
<div id="parent">
<div class="child"></div>
</div>
Style:
#parent {
display: flex;
justify-content: center;
}
If you want to center it vertically as well, you should write as below:
#parent {
display: flex;
justify-content: center;
align-items: center;
}
A: You can use the calc() method. Apply it to the div you're centering. If you know its width, let's say 1200 pixels, go for:
.container {
width:1200px;
margin-left: calc(50% - 600px);
}
So basically it'll add a left margin of 50% minus half the known width.
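A variant sketch that writes the known width only once, via a CSS custom property (the class name and width here are hypothetical):
.container {
--w: 1200px; /* the known width, declared once */
width: var(--w);
margin-left: calc(50% - var(--w) / 2);
}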
A: Here is another way to center horizontally using Flexbox, without specifying any width for the inner container. The idea is to use pseudo-elements that push the inner content from the right and the left.
Using flex: 1 on the pseudo-elements makes them fill the remaining space in equal shares, so the inner container gets centered.
.container {
display: flex;
border: 1px solid;
}
.container:before,
.container:after {
content: "";
flex: 1;
}
.inner {
border: 1px solid red;
padding: 5px;
}
<div class="container">
<div class="inner">
Foo content
</div>
</div>
We can also consider the same situation for vertical alignment by simply changing the direction of flex to column:
.container {
display: flex;
flex-direction: column;
border: 1px solid;
min-height: 200px;
}
.container:before,
.container:after {
content: "";
flex: 1;
}
.inner {
border: 1px solid red;
padding: 5px;
}
<div class="container">
<div class="inner">
Foo content
</div>
</div>
A: The best one I have used in my various projects is:
<div class="outer">
<div class="inner"></div>
</div>
.outer{
width: 500px;
height: 500px;
position: relative;
background: yellow;
}
.inner{
width: 100px;
height: 100px;
background:red;
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
}
fiddle link
A: This will surely center your #inner both horizontally and vertically. It is also compatible with all browsers. I just added extra styling to show how it is centered.
#outer {
background: black;
position: relative;
width:150px;
height:150px;
}
#inner {
background:white;
position: absolute;
left:50%;
top: 50%;
transform: translate(-50%,-50%);
-webkit-transform: translate(-50%,-50%);
-moz-transform: translate(-50%,-50%);
-o-transform: translate(-50%,-50%);
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
But of course, if you only want it horizontally aligned, this may help you.
#outer {
background: black;
position: relative;
width:150px;
height:150px;
}
#inner {
background:white;
position: absolute;
left:50%;
transform: translate(-50%,0);
-webkit-transform: translate(-50%,0);
-moz-transform: translate(-50%,0);
-o-transform: translate(-50%,0);
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
A: You can do it by using Flexbox which is a good technique these days.
For using Flexbox you should give display: flex; and align-items: center; to your parent or #outer div element. The code should be like this:
#outer {
display: flex;
align-items: center;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
This should center your child or #inner div vertically (align-items centers along the cross axis; use justify-content: center for horizontal centering). But you can't actually see any change, because our #outer div has no height; in other words, its height is set to auto, so it has the same height as its child elements. So after a little visual styling, the resulting code should be like this:
#outer {
height: 500px;
display: flex;
align-items: center;
background-color: blue;
}
#inner {
height: 100px;
background: yellow;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
You can see the #inner div is now centered vertically. Flexbox is the modern method of positioning elements in horizontal or vertical stacks with CSS, and it has about 96% global browser compatibility. So you are free to use it, and if you want to find out more about Flexbox, visit the CSS-Tricks article; that is the best place to learn Flexbox in my opinion.
A: This worked for me:
#inner {
position: absolute;
margin: 0 auto;
left: 0;
width: 7%;
right: 0;
}
In this code, you determine the width of the element.
A:
.outer
{
background-color: rgb(230,230,255);
width: 100%;
height: 50px;
}
.inner
{
background-color: rgb(200,200,255);
width: 50%;
height: 50px;
margin: 0 auto;
}
<div class="outer">
<div class="inner">
margin 0 auto
</div>
</div>
A: I used Flexbox or CSS grid
*
*Flexbox
#outer{
display: flex;
justify-content: center;
}
*CSS grid
#outer {
display: inline-grid;
grid-template-rows: 100px 100px 100px;
grid-template-columns: 100px 100px 100px;
grid-gap: 3px;
place-items: center; /* centers each item within its grid cell */
}
You can solve the issue in many ways.
A: This method also works just fine:
div.container {
display: flex;
justify-content: center; /* For horizontal alignment */
align-items: center; /* For vertical alignment */
}
For the inner <div>, the only condition is that its height and width must not be larger than those of its container.
A: The easiest way:
#outer {
width: 100%;
text-align: center;
}
#inner {
margin: auto;
width: 200px;
}
<div id="outer">
<div id="inner">Blabla</div>
</div>
A: Flex has more than 97% browser support coverage and might be the best way to solve these kinds of problems in a few lines:
#outer {
display: flex;
justify-content: center;
}
A: If width of the content is unknown you can use the following method. Suppose we have these two elements:
*
*.outer -- full width
*.inner -- no width set (but a max-width could be specified)
Suppose the computed widths of the elements are 1000 pixels and 300 pixels respectively. Proceed as follows:
*
*Wrap .inner inside .center-helper
*Make .center-helper an inline block; it becomes the same size as .inner making it 300 pixels wide.
*Push .center-helper 50% right relative to its parent; this places its left at 500 pixels wrt. outer.
*Push .inner 50% left relative to its parent; this places its left at -150 pixels wrt. center helper which means its left is at 500 - 150 = 350 pixels wrt. outer.
*Set overflow on .outer to hidden to prevent horizontal scrollbar.
Demo:
body {
font: medium sans-serif;
}
.outer {
overflow: hidden;
background-color: papayawhip;
}
.center-helper {
display: inline-block;
position: relative;
left: 50%;
background-color: burlywood;
}
.inner {
display: inline-block;
position: relative;
left: -50%;
background-color: wheat;
}
<div class="outer">
<div class="center-helper">
<div class="inner">
<h1>A div with no defined width</h1>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.<br>
Duis condimentum sem non turpis consectetur blandit.<br>
Donec dictum risus id orci ornare tempor.<br>
Proin pharetra augue a lorem elementum molestie.<br>
Nunc nec justo sit amet nisi tempor viverra sit amet a ipsum.</p>
</div>
</div>
</div>
A:
#centered {
position: absolute;
left: 50%;
margin-left: -100px;
}
<div id="outer" style="width:200px">
<div id="centered">Foo foo</div>
</div>
Make sure the parent element is positioned, i.e., relative, fixed, absolute, or sticky.
If you don't know the width of your div, you can use transform:translateX(-50%); instead of the negative margin.
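A minimal sketch of that unknown-width variant (the class name is hypothetical):
.centered-unknown {
position: absolute;
left: 50%;
transform: translateX(-50%); /* shifts back by half its own width; no fixed width needed */
}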
With CSS calc(), the code can get even simpler:
.centered {
width: 200px;
position: absolute;
left: calc(50% - 100px);
}
The principle is still the same; put the item in the middle and compensate for the width.
A: You can do something like this
#container {
display: table;
width: <width of your container>;
height: <height of your container>;
}
#inner {
width: <width of your center div>;
display: table-cell;
margin: 0 auto;
text-align: center;
vertical-align: middle;
}
This will also align the #inner vertically. If you don't want to, remove the display and vertical-align properties;
A: You can use one line of code, just text-align:center.
Here's an example:
#inner {
text-align: center;
}
<div id="outer" style="width:100%">
<div id="inner"><button>hello</button></div>
</div>
A: I'm sorry but this baby from the 1990s just worked for me:
<div id="outer">
<center>Foo foo</center>
</div>
Am I going to hell for this sin?
A: div{
width: 100px;
height: 100px;
margin: 0 auto;
}
This works for the normal case, when the div is in the static flow.
If you want a div to be centered while it is absolutely positioned within its parent, here is an example:
.parentdiv{
position: relative;
height: 500px;
}
.child_div{
position: absolute;
height: 200px;
width: 500px;
left: 0;
right: 0;
margin: 0 auto;
}
A: You can add another div inside #inner, move #inner right by 50%, and move the nested div left by 50% (half of the width of #inner).
#inner {
position: absolute;
left: 50%;
}
#inner > div {
position: relative;
left: -50%;
}
<div id="outer">
<div id="inner"><div>Foo foo</div></div>
</div>
A: The previous examples used margin: 0 auto, display: table, and transform with translate.
What about just using a tag? Everyone knows there is a <center> tag, which is deprecated in HTML5 but still rendered by browsers. It works, for instance, in my old projects.
It is working, but MDN Web Docs and other websites now advise not to use it any more (on Can I use you can see the notes from MDN Web Docs). Still, the option exists, and it is useful to know about it.
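For reference, a minimal sketch of a modern replacement for <center> (the class name is hypothetical):
/* centers inline and inline-block children, much as <center> did */
.center-like {
text-align: center;
}
.center-like > div {
display: inline-block; /* shrink-to-fit so text-align can center block children too */
}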
A: Try this:
<div style="position: absolute;left: 50%;top: 50%;-webkit-transform: translate(-50%, -50%);transform: translate(-50%, -50%);"><div>Example</div></div>
A: In my case I needed to center (on screen) a dropdown menu (using flexbox for its items) below a button that could be at various vertical locations. None of the suggestions worked until I changed position from absolute to fixed, like this:
#outer {
margin: auto;
left: 0;
right: 0;
position: fixed;
}
#inner {
text-align: center;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
The above code makes the dropdown always center on the screen for devices of all sizes, no matter where the dropdown button is located vertically.
A: There are several ways to achieve it: using "flex", "positioning", "margin" and others. Assuming #outer and #inner divs given in the question:
I would recommend using "flex"
#outer {
display: flex;
justify-content: center;
align-items: center; /* if you also need vertical center */
}
Horizontal align using positioning
#outer {
position: relative;
}
#inner {
position: absolute;
left: 50%;
transform: translateX(-50%);
}
Horizontal and vertical-align using positioning
#outer {
position: relative;
}
#inner {
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
}
Horizontal align using margin
#inner {
width: fit-content;
margin: 0 auto;
}
A: Here is what you want in the shortest way.
JSFIDDLE
#outer {
margin-top: 100px;
height: 500px; /* you can set whatever you want */
border: 1px solid #ccc;
}
#inner {
border: 1px solid #f00;
position: relative;
top: 50%;
transform: translateY(-50%);
}
A: I've created this example to show how to vertically and horizontally align.
The code is basically this:
#outer {
position: relative;
}
and...
#inner {
margin: auto;
position: absolute;
left:0;
right: 0;
top: 0;
bottom: 0;
}
And it will stay in the center even when you resize your screen.
A: You can use display: flex for your outer div and to horizontally center you have to add justify-content: center
#outer{
display: flex;
justify-content: center;
}
or you can visit w3schools - CSS flex Property for more ideas.
A: Some posters have mentioned the CSS 3 way to center using display:box.
This syntax is outdated and shouldn't be used anymore. [See also this post].
So just for completeness here is the latest way to center in CSS 3 using the Flexible Box Layout Module.
So if you have simple markup like:
<div class="box">
<div class="item1">A</div>
<div class="item2">B</div>
<div class="item3">C</div>
</div>
...and you want to center your items within the box, here's what you need on the parent element (.box):
.box {
display: flex;
flex-wrap: wrap; /* Optional. only if you want the items to wrap */
justify-content: center; /* For horizontal alignment */
align-items: center; /* For vertical alignment */
}
.box {
display: flex;
flex-wrap: wrap;
/* Optional. only if you want the items to wrap */
justify-content: center;
/* For horizontal alignment */
align-items: center;
/* For vertical alignment */
}
* {
margin: 0;
padding: 0;
}
html,
body {
height: 100%;
}
.box {
height: 200px;
display: flex;
flex-wrap: wrap;
justify-content: center;
align-items: center;
border: 2px solid tomato;
}
.box div {
margin: 0 10px;
width: 100px;
}
.item1 {
height: 50px;
background: pink;
}
.item2 {
background: brown;
height: 100px;
}
.item3 {
height: 150px;
background: orange;
}
<div class="box">
<div class="item1">A</div>
<div class="item2">B</div>
<div class="item3">C</div>
</div>
If you need to support older browsers which use older syntax for flexbox here's a good place to look.
A: Well, I managed to find a solution that maybe will fit all situations, but uses JavaScript:
Here's the structure:
<div class="container">
<div class="content">Your content goes here!</div>
<div class="content">Your content goes here!</div>
<div class="content">Your content goes here!</div>
</div>
And here's the JavaScript snippet:
$(document).ready(function() {
$('.container .content').each( function() {
container = $(this).closest('.container');
content = $(this);
containerHeight = container.height();
contentHeight = content.height();
margin = (containerHeight - contentHeight) / 2;
content.css('margin-top', margin);
})
});
If you want to use it in a responsive approach, you can add the following:
$(window).resize(function() {
$('.container .content').each( function() {
container = $(this).closest('.container');
content = $(this);
containerHeight = container.height();
contentHeight = content.height();
margin = (containerHeight - contentHeight) / 2;
content.css('margin-top', margin);
})
});
A: Here is one more option I found:
Everybody says to use:
margin: 0 auto;
But there is another option. Set this property on the parent div. It
works perfectly anytime:
text-align: center;
And see, the child goes to the center.
And finally CSS for you:
#outer{
text-align: center;
display: block; /* Or inline-block - base on your need */
}
#inner
{
position: relative;
margin: 0 auto; /* It is good to be */
}
A: You can just simply use Flexbox like this:
#outer {
display: flex;
justify-content: center
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
Apply Autoprefixer for all browser support:
#outer {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
width: 100%;
-webkit-box-pack: center;
-ms-flex-pack: center;
justify-content: center
}
Or else
Use transform:
#inner {
position: absolute;
left: 50%;
transform: translate(-50%)
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
With Autoprefixer:
#inner {
position: absolute;
left: 50%;
-webkit-transform: translate(-50%);
-ms-transform: translate(-50%);
transform: translate(-50%)
}
A: #outer {postion: relative}
#inner {
width: 100px;
height: 40px;
position: absolute;
top: 50%;
margin-top: -20px; /* Half of your height */
}
A: Depending on your circumstances, the simplest solution could be:
margin: 0 auto; float: none;
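A fuller sketch of that rule in context; the width here is a hypothetical example, and auto margins only center boxes that are not floated:
#inner {
width: 200px; /* hypothetical width; auto margins need one */
margin: 0 auto;
float: none; /* auto margins are ignored on floated elements */
}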
A: Yes, this is short and clean code for horizontal alignment.
.classname {
display: box;
margin: 0 auto;
width: 500px /* Width set as per your requirement. */;
}
A: It is so simple.
Just decide what width you want to give to the inner div and use the following CSS.
CSS
.inner{
width: 500px; /* Assumed width */
margin: 0 auto;
}
A:
<div id="outer" style="width:100%;margin: 0 auto; text-align: center;">
<div id="inner">Foo foo</div>
</div>
A: After reading all the answers I did not see the one I prefer. This is how you can center an element in another.
jsfiddle - http://jsfiddle.net/josephtveter/w3sksu1w/
<p>Horz Center</p>
<div class="outterDiv">
<div class="innerDiv horzCenter"></div>
</div>
<p>Vert Center</p>
<div class="outterDiv">
<div class="innerDiv vertCenter"></div>
</div>
<p>True Center</p>
<div class="outterDiv">
<div class="innerDiv trueCenter"></div>
</div>
.vertCenter
{
position: absolute;
top:50%;
-ms-transform: translateY(-50%);
-moz-transform: translateY(-50%);
-webkit-transform: translateY(-50%);
transform: translateY(-50%);
}
.horzCenter
{
position: absolute;
left: 50%;
-ms-transform: translateX(-50%);
-moz-transform: translateX(-50%);
-webkit-transform: translateX(-50%);
transform: translateX(-50%);
}
.trueCenter
{
position: absolute;
left: 50%;
top: 50%;
-ms-transform: translate(-50%, -50%);
-moz-transform: translate(-50%, -50%);
-webkit-transform: translate(-50%, -50%);
transform: translate(-50%, -50%);
}
.outterDiv
{
position: relative;
background-color: blue;
width: 10rem;
height: 10rem;
margin: 2rem;
}
.innerDiv
{
background-color: red;
width: 5rem;
height: 5rem;
}
A: Give some width to the inner div and add margin: 0 auto; to its CSS.
A: CSS
#inner {
display: table;
margin: 0 auto;
}
HTML
<div id="outer" style="width:100%">
<div id="inner">Foo foo</div>
</div>
A: Use the below code.
HTML
<div id="outer">
<div id="inner">Foo foo</div>
</div>
CSS
#outer {
text-align: center;
}
#inner{
display: inline-block;
}
A: You can add this code:
#inner {
width: 90%;
margin: 0 auto;
text-align:center;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
A: We can use the following CSS class, which allows centering any element vertically and horizontally relative to its (positioned) parent:
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
}
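A hypothetical usage example; the parent must be positioned for the child to center against it:
<div style="position: relative; height: 200px;">
<div class="centerElement">Foo foo</div>
</div>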
A: Use this code:
<div id="outer">
<div id="inner">Foo foo</div>
</div>
#inner {
width: 50%;
margin: 0 auto;
text-align: center;
}
A: Use:
<style>
#outer{
text-align: center;
width: 100%;
}
#inner{
text-align: center;
}
</style>
A: This centers your inner div horizontally and vertically:
#outer{
display: flex;
}
#inner{
margin: auto;
}
For only horizontal align, change
margin: 0 auto;
and for vertical, change
margin: auto 0;
A: One of the easiest ways you can do it is by using display: flex. The outer div just needs to have display flex, and the inner needs margin: 0 auto to make it centered horizontally.
To center vertically and just center a div within another div, please look at the comments of the .inner class below
.wrapper {
display: flex;
/* Adding whatever height & width we want */
height: 300px;
width: 300px;
/* Just so you can see it is centered */
background: peachpuff;
}
.inner {
/* center horizontally */
margin: 0 auto;
/* center vertically */
/* margin: auto 0; */
/* center */
/* margin: 0 auto; */
}
<div class="wrapper">
<div class="inner">
I am horizontally!
</div>
</div>
A: CSS justify-content property
It aligns the Flexbox items at the center of the container:
#outer {
display: flex;
justify-content: center;
}
A:
#outer
{
display: grid;
justify-content: center;
}
<div id="outer">
<div id="inner">hello</div>
</div>
A: I've seen lots and lots of answers, and many are outdated. Modern CSS already has a solution for this common problem: CSS Grid's place-items centers the object literally in the middle no matter what happens, and YES, it's responsive. So never use transform() or manual positioning for this again.
.HTML
...
<div class="parent">
<form> ... </form>
<div> ... </div>
</div>
.CSS
.parent {
display: grid;
place-items: center;
}
A: Recap 2022
This is a very old question so I'm just trying to report the situation today:
*
*CSS grid and flexbox are the best options you have for centering, horizontal or vertical;
*margin:auto method works well if the inner content is not a box (inline-block is okay);
*margin 50% with transform:translateX(-50%) is brute force but works allright;
*same thing with absolute positions and translateX/Y is good for horizontal and vertical centering too, many dialogs use that, stretching height to 100vh;
*the good old text-align:center with inline-block still works
*the ancient demon called "center tag" still works, actually it's the easiest way for horizontal centering. Deprecated, feared & hated by many but still;
*tables (td tags, actually) can center beautifully, horizontal and vertical, but they're also called old hat;
*these last 2 will work in email templates too (they're HTML4) if you're unlucky enough to work on one.
That's what it looks like in 2022, and I hope we'll never need more than grids and flexboxes. They are the answer to all the prayers we made back in 1999.
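As a quick reference, a minimal sketch of the two recommended options (the class names are hypothetical):
/* CSS grid: one rule on the parent centers on both axes */
.parent-grid {
display: grid;
place-items: center;
}
/* Flexbox: two rules on the parent do the same */
.parent-flex {
display: flex;
justify-content: center;
align-items: center;
}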
A: Try playing around with
margin: 0 auto;
If you want to center your text too, try using:
text-align: center;
A: If anyone would like a jQuery solution for center align these divs:
$(window).bind("load", function() {
var wwidth = $("#outer").width();
var width = $('#inner').width();
$('#inner').attr("style", "padding-left: " + wwidth / 2 + "px; margin-left: -" + width / 2 + "px;");
});
A: We can use Flexbox to achieve this really easily:
<div id="outer">
<div id="inner">Foo foo</div>
</div>
Center a div inside a div horizontally:
#outer {
display: flex;
justify-content: center;
}
Center a div inside a div vertically:
#outer {
display: flex;
align-items: center;
}
And, to center the div completely, both vertically and horizontally:
#outer{
display: flex;
justify-content: center;
align-items: center;
}
A: Just add this CSS content into your CSS file. It will automatically center the content.
Align horizontally to center in CSS:
#outer {
display: flex;
justify-content: center;
}
Align vertically + horizontally to center in CSS:
#outer {
display: flex;
justify-content: center;
align-items: center;
}
A: If you don't want to set a fixed width and don't want the extra margin, add display: inline-block to your element (as sketched below; the parent then needs text-align: center).
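A minimal sketch of that inline-block option (the selector names are hypothetical):
#parent {
text-align: center; /* centers inline-block children */
}
#element {
display: inline-block; /* shrink-to-fit, so no fixed width is needed */
}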
You can use:
#element {
display: table;
margin: 0 auto;
}
A: Centering a div of unknown height and width
Horizontally and vertically. It works with reasonably modern browsers (Firefox, Safari/WebKit, Chrome, Internet Explorer 10+, Opera, etc.).
.content {
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
}
<div class="content">This works with any content</div>
Tinker with it further on Codepen or on JSBin.
A: A very simple and cross-browser answer to horizontal center is to apply this rule to the parent element:
.parentBox {
display: flex;
justify-content: center
}
A: If you don't want to set a fixed width on the inner div you could do something like this:
#outer {
width: 100%;
text-align: center;
}
#inner {
display: inline-block;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
That makes the inner div into an inline element that can be centered with text-align.
A: I have applied the inline style to the inner div. Use this one:
<div id="outer" style="width:100%">
<div id="inner" style="display:table;margin:0 auto;">Foo foo</div>
</div>
A: With Grid
A pretty simple and modern way is to use display: grid:
div {
border: 1px dotted grey;
}
#outer {
display: grid;
place-items: center;
height: 50px; /* not necessary */
}
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<div id="outer">
<div>Foo foo</div>
</div>
</body>
</html>
A: Set the width and set margin-left and margin-right to auto. That's for horizontal only, though. If you want both ways, you'd just do it both ways. Don't be afraid to experiment; it's not like you'll break anything.
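A minimal sketch of that rule (the width is a hypothetical example):
#inner {
width: 200px; /* any explicit width */
margin-left: auto;
margin-right: auto;
}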
A: It cannot be centered if you don't give it a width. Otherwise, it will take, by default, the whole horizontal space.
A: CSS 3's box-align property
#outer {
width: 100%;
height: 100%;
display: box;
box-orient: horizontal;
box-pack: center;
box-align: center;
}
A: A nice trick I recently found: mixing line-height + vertical-align with the 50% left trick, you can center a dynamically sized box inside another dynamically sized box, both horizontally and vertically, using pure CSS.
Note you must use spans (and inline-block), tested in modern browsers + Internet Explorer 8.
HTML:
<h1>Center dynamic box using only css test</h1>
<div class="container">
<div class="center">
<div class="center-container">
<span class="dyn-box">
<div class="dyn-head">This is a head</div>
<div class="dyn-body">
This is a body<br />
Content<br />
Content<br />
Content<br />
Content<br />
</div>
</span>
</div>
</div>
</div>
CSS:
.container {
position: absolute;
left: 0;
right: 0;
top: 0;
bottom: 0;
overflow: hidden;
}
.center {
position: absolute;
left: 50%;
top: 50%;
}
.center-container {
position: absolute;
left: -2500px;
top: -2500px;
width: 5000px;
height: 5000px;
line-height: 5000px;
text-align: center;
overflow: hidden;
}
.dyn-box {
display: inline-block;
vertical-align: middle;
line-height: 100%;
/* Purely aesthetic below this point */
background: #808080;
padding: 13px;
border-radius: 11px;
font-family: arial;
}
.dyn-head {
background: red;
color: white;
min-width: 300px;
padding: 20px;
font-size: 23px;
}
.dyn-body {
padding: 10px;
background: white;
color: red;
}
See example here.
A: I know I'm a bit late to answering this question, and I haven't bothered to read every single answer so this may be a duplicate. Here's my take:
#inner { width: 50%; background-color: khaki; margin: 0 auto; }
A: Try this:
<div id="a">
<div id="b"></div>
</div>
CSS:
#a{
border: 1px solid red;
height: 120px;
width: 400px
}
#b{
border: 1px solid blue;
height: 90px;
width: 300px;
position: relative;
margin-left: auto;
margin-right: auto;
}
A: First of all: You need to give a width to the second div:
For example:
HTML
<div id="outter">
<div id="inner"Centered content">
</div
</div>
CSS:
#inner{
width: 50%;
margin: auto;
}
Note that if you don't give it a width, it will take the whole width of the line.
A: Instead of multiple wrappers and/or auto margins, this simple solution works for me:
<div style="top: 50%; left: 50%;
height: 100px; width: 100px;
margin-top: -50px; margin-left: -50px;
background: url('lib/loading.gif') no-repeat center #fff;
text-align: center;
position: fixed; z-index: 9002;">Loading...</div>
It puts the div at the center of the view (vertical and horizontal), sizes and adjusts for size, centers background image (vertical and horizontal), centers text (horizontal), and keeps div in the view and on top of the content. Simply place in the HTML body and enjoy.
A: The best way is using a table-cell display on the inner div, placed directly inside a div with display: table (the outer), and setting vertical-align on the inner div; every tag you put in the inner div is then placed in the center of the div or page.
Note: you must set a specific height on the outer element.
It is the best way without position: relative or absolute, and it behaves the same in every browser.
#outer{
display: table;
height: 100vh;
width: 100%;
}
#inner{
display: table-cell;
vertical-align: middle;
text-align: center;
}
<div id="outer">
<div id="inner">
<h1>
set content center
</h1>
<div>
hi this is the best way to align your items center
</div>
</div>
</div>
A: HTML:
<div id="outer">
<div id="inner">
</div>
</div>
CSS:
#outer{
width: 500px;
background-color: #000;
height: 500px
}
#inner{
background-color: #333;
margin: 0 auto;
width: 50%;
height: 250px;
}
Fiddle.
A: Add text-align: center; to the parent div:
#outer {
text-align: center;
}
https://jsfiddle.net/7qwxx9rs/
or
#outer > div {
margin: auto;
width: 100px;
}
https://jsfiddle.net/f8su1fLz/
A: Just simply margin: 0 auto:
#inner{
display: block;
margin: 0px auto;
width: 100px;
}
<div id="outer" style="width:100%">
<div id="inner">Foo foo</div>
</div>
A: You can do it in different ways. See the examples below:
1. First Method
#outer {
text-align: center;
width: 100%;
}
#inner {
display: inline-block;
}
2. Second method
#outer {
position: relative;
overflow: hidden;
}
.centered {
position: absolute;
left: 50%;
}
A:
#inner {
display: table;
margin: 0 auto;
}
<div id="outer" style="width:100%">
<div id="inner">Foo foo</div>
</div>
A: The main attributes for centering the div are margin: auto and a width set according to your requirements:
.DivCenter{
width: 50%;
margin: auto;
border: 3px solid #000;
padding: 10px;
}
A:
#outer {
width: 160px;
padding: 5px;
border-style: solid;
border-width: thin;
display: block;
}
#inner {
margin: auto;
background-color: lightblue;
border-style: solid;
border-width: thin;
width: 80px;
padding: 10px;
text-align: center;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
A: I found a similar approach with margin-left, but it can be done with left as well.
#inner {
width: 100%;
max-width: 65px; /* To adapt to screen width. It can be whatever you want. */
left: 65px; /* This has to be approximately the same as the max-width. */
}
A: #inner {
width: 50%;
margin: 0 auto;
}
A: You can horizontally center a <div> within another <div> by using text-align Property in CSS.
text-align: center is used to center the text of the outer div horizontally.
text-align: right is used to align the text to the right.
text-align: left is used to align the text to the left.
text-align: justify is used to stretch the lines so that each line has equal width.
div.a {
text-align: center;
}
div.b {
text-align: left;
}
div.c {
text-align: right;
}
div.d {
text-align: justify;
}
<h1>The text-align Property</h1>
<div class="a">
<h2>text-align: center:</h2>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam semper diam at erat pulvinar, at pulvinar felis blandit. Vestibulum volutpat tellus diam, consequat gravida libero rhoncus ut.</p>
</div>
<div class="b">
<h2>text-align: left:</h2>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam semper diam at erat pulvinar, at pulvinar felis blandit. Vestibulum volutpat tellus diam, consequat gravida libero rhoncus ut.</p>
</div>
<div class="c">
<h2>text-align: right:</h2>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam semper diam at erat pulvinar, at pulvinar felis blandit. Vestibulum volutpat tellus diam, consequat gravida libero rhoncus ut.</p>
</div>
<div class="d">
<h2>text-align: justify:</h2>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam semper diam at erat pulvinar, at pulvinar felis blandit. Vestibulum volutpat tellus diam, consequat gravida libero rhoncus ut.</p>
</div>
A: Updated 2022 Centering Element Horizontally
1. Horizontally centering using flexbox
To horizontally center an element like a div, you need to add display: flex and justify-content: center to the element's CSS class.
<div class="center">
<h1>I'm Horizontally center</h1>
</div>
.center{
display:flex;
justify-content:center;
}
2. Horizontally centering using margins
The example below shows how to horizontally center elements using margins and width.
.center{
width:50%;
margin:0 auto;
}
In the above code, we have added width: 50% and margin: 0 auto so that the element splits the available space equally between the left and right margins.
3. Horizontally centering using transform
The example below shows how to horizontally center elements using the position and transform properties.
.center{
position: absolute;
left: 50%;
transform: translateX(-50%);
}
*
*First, we added position: absolute, so that the element is taken out of the normal document flow.
*Second, we added left: 50%, so that the element moves forward 50% along the x-axis.
*Third, we added transform: translateX(-50%), so that the element moves back by 50% of its own width and is aligned to the center.
A: Add CSS to your inner div. Set margin: 0 auto and set its width less than 100%, which is the width of the outer div.
<div id="outer" style="width:100%">
<div id="inner" style="margin:0 auto;width:50%">Foo foo</div>
</div>
This will give the desired result.
A: Flexbox Center Horizontally and Vertically Center Align an Element
.wrapper {border: 1px solid #678596; max-width: 350px; margin: 30px auto;}
.parentClass { display: flex; justify-content: center; align-items: center; height: 300px;}
.parentClass div {margin: 5px; background: #678596; width: 50px; line-height: 50px; text-align: center; font-size: 30px; color: #fff;}
<h1>Flexbox Center Horizontally and Vertically Center Align an Element</h1>
<h2>justify-content: center; align-items: center;</h2>
<div class="wrapper">
<div class="parentClass">
<div>c</div>
</div>
</div>
A: button {
margin: 0 auto;
width: fit-content;
display: block;
}
/* the container should have a set width (for example 100%) */
A:
#outer {
display:grid;
place-items:center;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
A: I'd simply suggest using justify-content: center; when the container is displayed as flex,
and text-align: center; when it is a text element.
Check the code below and modify it as per your requirements.
#content_block {
border: 1px solid black;
padding: 10px;
width: 50%;
text-align: center;
}
#container {
border: 1px solid red;
width:100%;
padding: 20px;
display: flex;
justify-content: center;
}
<div id="container">
<div id="content_block">Foo foo check</div>
</div>
A: #outer{
display: flex;
width: 100%;
height: 200px;
justify-content:center;
align-items:center;
}
A:
*{
margin: 0;
padding: 0;
}
#outer{
background: red;
width: 100%;
height: 100vh;
display: flex;
/* center vertically */
justify-content: center;
flex-direction: column;
/* center horizontally */
align-items: center;
}
.inner{
width: 80%;
height: 40px;
background: grey;
margin-top:5px;
}
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>horizontally center an element</title>
</head>
<body>
<div id="outer">
<div id="inner">1</div>
<div id="inner" style="background: green;">2</div>
<div id="inner" style="background: yellow;">3</div>
</div>
</body>
</html>
A: <div class="container">
<div class="res-banner">
<img class="imgmelk" src="~/File/opt_img.jpg" >
</div>
</div>
CSS code:
.res-banner{
width:309px;
margin: auto;
height:309px;
}
A: <div id="outer">
<div id="inner">Foo foo</div>
</div>
#outer{
display:flex;
justify-content:center;
align-items:center;
}
A: <center>
I am spoiled with the most simple center known?
</center>
A: .outer {
text-align: center;
width: 100%
}
A: <div id="outer" style="width:100%">
<div id="inner" style="text-align:center">Foo foo</div>
</div>
A: You can horizontally align any element using:
<div align=center>
(code goes here)
</div>
Or:
<!-- css here -->
.center-me {
margin: 0 auto;
}
<!-- html here -->
<div class="center-me">
(code goes here)
</div>
A:
<!DOCTYPE html>
<html>
<head>
<title>Center</title>
<style>
.outer{
text-align: center;
}
.inner{
width: 500px;
margin: 0 auto;
background: brown;
color: red;
}
</style>
</head>
<body>
<div class="outer">
<div class="inner">This DIV is centered</div>
</div>
</body>
</html>
Please try this. It will work without the HTML center tag.
A: Centering: Auto-width Margins
This box is horizontally centered by setting its right and left margin widths to "auto". This is the preferred way to accomplish horizontal centering with CSS and works very well in most browsers with CSS 2 support. Unfortunately, Internet Explorer 5/Windows does not respond to this method - a shortcoming of that browser, not the technique.
There is a simple workaround. (A pause while you fight back the nausea induced by that word.) Ready? Internet Explorer 5/Windows incorrectly applies the CSS "text-align" attribute to block-level elements. Declaring "text-align:center" for the containing block-level element (often the BODY element) horizontally centers the box in Internet Explorer 5/Windows.
There is a side effect of this workaround: the CSS "text-align" attribute is inherited, centering inline content. It is often necessary to explicitly set the "text-align" attribute for the centered box, counteracting the effects of the Internet Explorer 5/Windows workaround. The relevant CSS follows.
body {
margin: 50px 0px;
padding: 0px;
text-align: center;
}
#Content {
width: 500px;
margin: 0px auto;
text-align: left;
padding: 15px;
border: 1px dashed #333;
background-color: #EEE;
}
http://bluerobot.com/web/css/center1.html
A: You can use this example: https://plnkr.co/edit/MQD5QHJe5oUVKEvHCz8p?p=preview
.outer{
display: table;
width: 100%;
height: 100%;
}
.inner {
display: table-cell;
vertical-align: middle;
}
Refer to https://v4-alpha.getbootstrap.com/examples/cover/
A: For a horizontally centered DIV:
#outer {
width: 100%;
text-align: center;
}
#inner {
display: inline-block;
}
<div id="outer">
<div id="inner">Foo foo</div>
</div>
A: I think this will be a solution:
#outer {
position: absolute;
left: 50%;
}
#inner {
position: relative;
left: -50%;
}
Both elements must be the same width for this to function properly.
A: You can use a center tag for convenience (note that it is deprecated):
<div id="outer">
<center>
<div id="inner">Foo foo</div>
</center>
</div>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4988"
}
|
Q: Setting up Subversion on a Red Hat system I'm fairly new to the world of versioning but would like to introduce Subversion into our small development team as we're finding that more and more we are working on the same projects/files. We primarily code in PHP on a LAMP environment (Red Hat Enterprise).
What I'm looking for are some starting points to get me up and running such as installation/configuration guides for Red Hat. I use Dreamweaver CS3 (I see that Subversion functions will be introduced in CS4) so would be willing to look at other PHP IDEs with Subversion integration, do you have any preferences? I'm open to suggestions.
A: It depends on what version of RHEL you are running. Setting up Subversion in general is very easy: you just have to install the binaries and run svnserve or adapt the Apache configuration.
*
*Get it: http://subversion.tigris.org/getting.html
*Install it
*svnadmin create --fs-type=fsfs /path/to/repository
After that you have a repository which you can serve via apache or svnserve. I can recommend Apache because it scales better, is easier to maintain and allows you to access the repository via DAV.
Example configurations are here: http://svnbook.red-bean.com/en/1.0/ch06s04.html
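For reference, serving the repository through Apache with mod_dav_svn usually comes down to a small block like the sketch below; the module path, location, repository path and auth file here are placeholders you would adapt to your own system:
LoadModule dav_svn_module modules/mod_dav_svn.so
<Location /svn>
DAV svn
SVNPath /var/svn/repos
AuthType Basic
AuthName "Subversion repository"
AuthUserFile /etc/svn-auth-file
Require valid-user
</Location>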
A: Installing Subversion is likely not going to be the hardest part; what's going to be difficult is how you access the repository. There are a variety of options (a file share on the network, Subversion over SSH, through an http connection). Each has its own pros and cons. How are you currently developing? If you are all using the same webroot, for instance, version control is not going to help, as you'd still be changing each other's files, so you'll have to create separate sites for each developer.
As for the IDE, there's a great shell integration for Windows in the form of TortoiseSVN, which would still allow you to work with your favourite tools and still have easy access to the SVN features.
A: On a RHEL system, the easiest way to install subversion is by using yum:
yum install subversion
A: These are good for Linux + Subversion:
http://articles.slicehost.com/subversion
Plus it goes into multiple repositories, WebDAV and a lot of other things. Useful for Windows devs too as most of the info can be used in Windows too.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Mac toolbar via WINE / Crossover Does anyone know if it's possible to get a Win32 application to run under wine / crossover but have the main toolbar appear as a Mac toolbar (i.e. outside the wine / crossover app)?
A: What is the "main toolbar"? In Win32, windows do not require a menu bar (ie: IE), or even a main window (!) so this is obviously not possible in general. If you really wanted to, you could send GetMenu() to the first created window, then use (something like? I haven't used the menu APIs much) GetMenuItemInfo() to fill the mac toolbar whenever the app gains focus, but I think this would be a lot of work for an 80% at best solution, not to mention I wouldn't know where to start to integrate this with WINE or crossover.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Serializing Date in Java I'm passing around some objects through a web service and some of them contain java.sql.Date. Because Date doesn't have an empty constructor, it doesn't want to get serialized.
First part of a question is easy: what is the best way to pass a date between client and service?
Second part is bit trickier: Once I decide how to pass dates around, I can obviously declare date transient and make some wrapper class to pass dates as String or whatever, but how to apply same solution as transparently as possible to several classes that include Date?
(I have a hunch that DynamicProxy thingy might be a solution, but reading documentation on Sun's site wasn't very helpful, so if it really is something in that direction, some clarification would be appreciated)
Edit: I asked the wrong question, sorry (some misunderstanding between me and a coworker about what the actual problem is). The problem occurs because of deserializing. So once I have the date in XML format, it tries to deserialize itself as a GregorianCalendar. The other part of the question still remains: what is the best way to receive something (a long timestamp or GregorianCalendar) and convert it to an SQL date, without making 10 different wrappers for 10 different classes? I'm using NetBeans for code and WSDL generation.
A: Joda-Time
The Date class has a clunky API. A better implementation is Joda-Time.
ISO 8601
Joda-Time also allows you to convert your date to an ISO 8601 standard format (yyyy-MM-ddTHH:mm:ss.SSS). Using this standard when moving dates from a server to its client has the advantage of including the full date in a readable format. When you use, for example, JAXB, the XML representation of a date is also this ISO standard. (see the XMLGregorianCalendar class)
A: Serializing the long returned by Date.getTime() as previously suggested will work. You should however note that if your server is in another time zone than the client, the date you'll reconstruct on the other side will be different. If you want to reconstruct the exact same date object, you also need to send your time zone (TimeZone.getID()) and use it to reconstruct the date on the other side.
A: To answer the first part of your question, I would suggest a string in ISO 8601 format (this is a standard for encoding dates).
For the second part, I'm not sure why you would need a proxy class? Or why you would have to extend the date class to support this. eg. would not your web service know that a certain field is a date and do the conversion from date to string and back itself? I'd need a little more information.
A: java.sql.Date extends java.util.Date
Just use getTime() to get the long value from it. This can be serialized and a new java.sql.Date(long) or new java.util.Date(long) constructed from it at the other end.
A: I've looked into the implementation of java.sql.Date and as I see it java.sql.Date is Serializable as an extension of java.util.Date.
A: One caveat with java.sql.Date that bit me recently is that it doesn't store the time portions (hours, minutes, seconds, etc.), just the date portion. If you want the full timestamp you have to use java.util.Date or java.sql.Timestamp.
A: I will expand on the correct answer by JeroenWyseur.
ISO 8601
The ISO 8601 standard format is absolutely the best way to serialize a date-time value for data exchange. The format is unambiguous, intuitive to peoples across cultures, and increasingly common around the world. Easy to read for both humans and machines.
2015-01-16T20:15:43+02:00
2015-01-16T18:15:43Z
The first example has an offset of two hours ahead of UTC. The second example shows the common use of Z ("Zulu") to indicate UTC, short for +00:00.
java.time
The java.util.Date & .Calendar classes bundled with Java are notoriously troublesome, confusing, and flawed. Avoid them. Instead use:
*
*java.time package, built into Java 8, inspired by Joda-Time, defined by JSR 310.
The java.time package supplants its predecessor, the Joda-Time library.
By default, both libraries use ISO 8601 for both parsing and generating String representations of date-time values.
Note that java.time extends the ISO 8601 format by appending the proper name of the time zone, such as 2007-12-03T10:15:30+01:00[Europe/Paris].
Search StackOverflow.com for many hundreds of Questions and Answers with much discussion and example code.
Avoid Count-From-Epoch
Some of the other answers recommend using a number, a count from epoch. This approach is not practical. It is not self-evident. It is not human-readable, making debugging troublesome and frustrating.
Which number is it, whole seconds as commonly used in Unix, milliseconds used in java.util.Date & Joda-Time, microseconds commonly used in databases such as Postgres, or nanoseconds used in java.time package?
Which of the couple dozen epochs, first moment of 1970 used in Unix, year 1 used in .Net & Go, "January 0, 1900" used in millions (billions?) of Excel & Lotus spreadsheets, or January 1, 2001 used by Cocoa?
See my answer on a similar question for more discussion.
LocalDate
I'm passing around some objects through web service and some of them contain java.sql.Date
The replacement for the terrible java.sql.Date class is java.time.LocalDate.
Best to avoid the legacy class entirely, but you can convert back and forth by calling new methods added to the old class: myJavaSqlDate.toLocalDate()
Serializing LocalDate
The LocalDate class implements Serializable. So you should have no problem with it automatically serializing, both marshaling and unmarshalling.
About java.time
The java.time framework is built into Java 8 and later. These classes supplant the troublesome old legacy date-time classes such as java.util.Date, Calendar, & SimpleDateFormat.
To learn more, see the Oracle Tutorial. And search Stack Overflow for many examples and explanations. Specification is JSR 310.
The Joda-Time project, now in maintenance mode, advises migration to the java.time classes.
You may exchange java.time objects directly with your database. Use a JDBC driver compliant with JDBC 4.2 or later. No need for strings, no need for java.sql.* classes.
Where to obtain the java.time classes?
*
*Java SE 8, Java SE 9, Java SE 10, Java SE 11, and later - Part of the standard Java API with a bundled implementation.
*
*Java 9 adds some minor features and fixes.
*Java SE 6 and Java SE 7
*
*Most of the java.time functionality is back-ported to Java 6 & 7 in ThreeTen-Backport.
*Android
*
*Later versions of Android bundle implementations of the java.time classes.
*For earlier Android (<26), the ThreeTenABP project adapts ThreeTen-Backport (mentioned above). See How to use ThreeTenABP….
The ThreeTen-Extra project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as Interval, YearWeek, YearQuarter, and more.
A: First, if you are using web services, it means you are serializing to XML and not your regular Java serialization (but some other library for marshaling and unmarshaling). So the question is lacking some information.
Second, if you have control over your InputStream & OutputStream try extending ObjectOutputStream and ObjectInputStream and override replaceObject() and resolveObject() and then you can implement serialization for java.sql.Date.
A: You don't need a default (empty) constructor in order to serialize/deserialize a date (either java.sql.Date or java.util.Date). During deserialization the constructor is not called; the attributes of the object are set directly to the values from the serialized data, and you can use the object as-is once it is deserialized.
A: You could use an encoder and decoder to serialise and deserialise your objects.
Here is an example which serialises the SWT Rectangle class:
XMLEncoder encoder = new XMLEncoder(new FileOutputStream(file));
encoder.setPersistenceDelegate(
Rectangle.class,
new DefaultPersistenceDelegate(new String[]{"x", "y", "width", "height"}));
encoder.writeObject(groups);
encoder.close();
A: java.sql.Date already implements Serializable so no need to implement it :-)
As far as your main question is concerned, I'm deeply in love with JAXB as I can turn almost any XML into an object, so it might be worth your while to look into it.
A: Hmmm... Can't think of any reason why any serialized object-instance (serialized via the default java mechanism) should deserialize itself as an instance of another class as the class information should be an inherent part of the serialized data.
So it's either a problem of your (de-)serialization framework or the framework accepts any "date-like" object on the "sending end" (Calendar, java.util.Date etc. - and thus java.sql.Date too as it extends java.util.Date), "serializes" it to a String in some common date-format (so the type information is lost) and "deserializes" it back to a Calendar object on the receiving end.
So I think the simplest way to get to java.sql.Date is to do a
java.sql.Date date = new java.sql.Date(calendar.getTimeInMillis());
where you need an java.sql.Date but get the GregorianCalendar back from the "deserialization".
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: How helpful is knowing lambda calculus? To all the people who know lambda calculus: What benefit has it bought you, regarding programming? Would you recommend that people learn it?
A: The benefit of lambda calculus is that it's an extremely simple model of computation that is equivalent to a Turing machine. But while a Turing machine is more like assembly language, lambda calculus is more a like a high-level language. And if you learn Church encodings that will help you learn the programming technique called continuation-passing style, which is quite useful for implementing backtracking search and other neat tricks.
The main use of lambda calculus in practice is that it is a great laboratory tool for studying new programming-language ideas. If you have an idea for a new language feature, you can add the new feature to the lambda calculus and you get something that is expressive enough to program while being simple enough to study very thoroughly. This use is really more for language designers and theorists than for programmers.
Lambda calculus is also just very cool in its own right: just like knowing assembly language, it will deepen your understanding of computation. It's especially fun to program a universal turing machine in the lambda calculus. But this is foundational mathematics, not practical programming.
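To give a concrete taste of the Church encodings mentioned above, here is a minimal sketch in Python; the names zero, succ, add and to_int are my own choices, and this is purely illustrative:
# A Church numeral n means "apply a function f, n times, to an argument".
zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m + n applications
def to_int(n):
    # Interpret a Church numeral by counting how many times f is applied.
    return n(lambda k: k + 1)(0)
two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # prints 5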
A: The lambda calculus is a computational model, just like the turing machine. Thus, it is useful if you need to implement a certain evaluator for a language based on this model, however, in practice, you just need the basic idea (uh. place argument semantically correct in the body of a function?) and that's about it.
A: One possible way to learn lambda calculus is
http://en.wikipedia.org/wiki/Lambda_Calculus
Or, if you want more, here is my blog dedicated to lambda calculus and stuff like that
http://weblogs.manas.com.ar/lziliani/
As every abstraction of computations, with lambda calculus you can model stuff used in most programming languages, like subtyping. For more about this, one of the best books with practical uses of lambda calculus in this sense is
http://www.amazon.com/Types-Programming-Languages-Benjamin-Pierce/dp/0262162091/ref=sr_1_1?ie=UTF8&s=books&qid=1222088714&sr=8-1
A: I found that the Lambda calculus was useful for understanding how functional programming works on a deeper level, especially how to implement functional languages.
It has made it easier for me to understand advanced concepts like type systems and evaluation strategies (e.g. call by name versus call by value).
I don't think one needs to know anything about the Lambda calculus to use basic functional programming techniques. However understanding the lambda calculus makes it easier to learn advanced programming theory.
A: If you want to program in any functional programming language, it's essential. I mean, how useful is it to know about Turing machines? Well, if you write C, the language paradigm is quite close to Turing machines -- you have an instruction pointer and a current instruction, and the machine takes some action in the current state, and then ambles along to the next instruction.
In a functional language, you simply can't think like that -- that's not the language paradigm. You have to think back to lambda calculus, and how terms are evaluated there. It will be much harder for you to be effective in a functional language if you don't know lambda calculus.
A: I'd also like to mention that if you're doing anything in the area of NLP, lambda calculus is at the foundation of a massive body of work in compositional semantics.
A: To be honest, learning lambda calculus before functional programming has made me realize that the two are as unrelated as C is to any imperative programming.
Lambda calculus is a functional programming language, an esoteric one, a Turing tarpit if you like; incidentally it's also the first.
The majority of functional programming languages do not require you to 'learn' lambda calculus, whatever that would mean; lambda calculus is insanely minimal, and you can 'learn' its axioms in under an hour. To know the results from it, like the fixed-point theorem, the Church-Rosser theorem et cetera, is just irrelevant to functional programming.
Also, lambda-abstractions are often held to be 'functions'; I disagree with that. They are algorithms, not functions (a minor difference), and most 'functional languages' treat their functions more in the way classical mathematics does.
However, to effectively use, for instance, Haskell you do need to understand certain type systems; that's irrespective of lambda calculus, and the System F type system can be applied to all 'functions' while requiring no lambda abstractions at all. Commonly in maths we say f : R -> R : f(x) = x^2. We could've said: f(x) = x^2 :: R -> R. In fact, Haskell comes pretty close to this notation.
Lambda calculus is a theoretical formalism, Haskell's functions are really no more 'lambda abstractions' than f : f(x) = x^2 really, what makes lambda abstractions interesting is that it enables us to define what are normally seen as 'constants' as 'functions', no functional language does that because of the huge computational overhead. Haskell and alike is just a restricted form of System F's type system applied to functions as used in everyday classical maths. Functions in Haskell are certainly not the anonymous formally symbolic reduction-applicants as they are in lambda-calculus. Most functional programming languages are not symbolic reduction-based re-writing systems. Lisps are to some degree but that's a paradigm on its own and its 'lambda keyword' really doesn't satisfy calling it lambda calculus.
A: I think the use of lambda calculus with respect to programming in practice is that it is a quite minimal system that captures the essence of abstraction (or "anonymous functions" or closures, if you will). Other than that I don't think it is generally essential except when you need to implement abstraction yourself (as Tetha (114646) mentioned).
I also completely disagree with Denis Bueno (114701) who says that it is essential for functional programming. It is perfectly well possible to define, use or understand a functional language without any lambda calculus at all. In order to understand the evaluation of terms in functional languages (which, in my opinion, somewhat contradicts the use of a functional language) you will most likely be better off learning about term rewrite systems.
A: I agree with those that say it is theoretically possible to learn functional programming without learning the lambda calculus—but what's the advantage of not learning the lambda calculus? It's not as if it takes a big investment of time.
Most likely, it will help you understand functional programming better. But even if it doesn't, it's still a cool thing worth learning. The Y-combinator is a thing of beauty.
A: If you only want to be a technician and write programs to do things, then you don't really need to know lambda-calculus, finite-state machines, pushdown automata, regular expressions, context-free grammar, discrete mathematics, etc.
But if you have curiosity about the deeper mysteries underlying this stuff, you can start to wonder how these questions might be answered. The concepts are beautiful and will expand your imagination. I also think they, incidentally, make one a better practitioner.
What got me hooked was Minsky's book Computation: Finite and Infinite Machines.
A: The benefit for me is more compact, synergistic programming. Stuff tends to flow horizontally more than vertically. Plus it is very useful for prototyping simple algorithms. I don't know if I am using it to its full potential, but I find it very useful.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "78"
}
|
Q: Smart design of a math parser? What is the smartest way to design a math parser? What I mean is a function that takes a math string (like: "2 + 3 / 2 + (2 * 5)") and returns the calculated value? I did write one in VB6 ages ago but it ended up being way too bloated and not very portable (or smart for that matter...). General ideas, pseudocode or real code is appreciated.
A: A pretty good approach would involve two steps. The first step involves converting the expression from infix to postfix (e.g. via Dijkstra's shunting yard) notation. Once that's done, it's pretty trivial to write a postfix evaluator.
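To make that concrete, here is a minimal sketch in Python; the operator set, precedence table and function names are my own choices, tokens are space-separated only to keep the tokenizer trivial, and there is no unary minus or error handling:
PRECEDENCE = {'+': 1, '-': 1, '*': 2, '/': 2}
def to_postfix(tokens):
    # Dijkstra's shunting yard: operators wait on a stack until an operator
    # of lower precedence (or a parenthesis boundary) arrives.
    output, stack = [], []
    for tok in tokens:
        if tok in PRECEDENCE:
            # Pop while the top operator has higher OR EQUAL precedence
            # (correct for left-associative operators like - and /).
            while (stack and stack[-1] in PRECEDENCE
                   and PRECEDENCE[stack[-1]] >= PRECEDENCE[tok]):
                output.append(stack.pop())
            stack.append(tok)
        elif tok == '(':
            stack.append(tok)
        elif tok == ')':
            while stack[-1] != '(':
                output.append(stack.pop())
            stack.pop()  # discard the '('
        else:
            output.append(tok)  # operand
    while stack:
        output.append(stack.pop())
    return output
def eval_postfix(postfix):
    # The postfix evaluator is a single pass over the tokens with one stack.
    stack = []
    for tok in postfix:
        if tok in PRECEDENCE:
            b, a = stack.pop(), stack.pop()
            if tok == '+':
                stack.append(a + b)
            elif tok == '-':
                stack.append(a - b)
            elif tok == '*':
                stack.append(a * b)
            else:
                stack.append(a / b)
        else:
            stack.append(float(tok))
    return stack[0]
print(eval_postfix(to_postfix("2 + 3 / 2 + ( 2 * 5 )".split())))  # 13.5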
A: You have a couple of approaches. You could generate dynamic code and execute it in order to get the answer without needing to write much code. Just perform a search on runtime generated code in .NET and there are plenty of examples around.
Alternatively you could create an actual parser and generate a little parse tree that is then used to evaluate the expression. Again this is pretty simple for basic expressions. Check out codeplex as I believe they have a math parser on there. Or just look up BNF which will include examples. Any website introducing compiler concepts will include this as a basic example.
Codeplex Expression Evaluator
A: If you have an "always on" application, just post the math string to google and parse the result. Simple way, but not sure if that's what you need - but smart in some way I guess.
A: I know this is old, but I came across this trying to develop a calculator as part of a larger app and ran across some issues using the accepted answer. The links were IMMENSELY helpful in understanding and solving this problem and should not be discounted. I was writing an Android app in Java and for each item in the expression "string," I actually stored a String in an ArrayList as the user types on the keypad. For the infix-to-postfix conversion, I iterated through each String in the ArrayList, then evaluated the newly arranged postfix ArrayList of Strings. This was fantastic for a small number of operands/operators, but longer calculations were consistently off, especially as the expressions started evaluating to non-integers. In the provided link for Infix to Postfix conversion, it suggests popping the Stack if the scanned item is an operator and the topStack item has a higher precedence. I found that this is almost correct. Popping the topStack item if its precedence is higher OR EQUAL to the scanned operator finally made my calculations come out correct. Hopefully this will help anyone working on this problem, and thanks to Justin Poliey (and fas?) for providing some invaluable links.
A: The related question Equation (expression) parser with precedence? has some good information on how to get started with this as well.
-Adam
A: I wrote a few blog posts about designing a math parser. There is a general introduction, basic knowledge about grammars, sample implementation written in Ruby and a test suite. Perhaps you will find these materials useful.
A: Assuming your input is an infix expression in string format, you could convert it to postfix and, using a pair of stacks: an operator stack and an operand stack, work the solution from there. You can find general algorithm information at the Wikipedia link.
A: ANTLR is a very nice LL(*) parser generator. I recommend it highly.
A: Developers always want to have a clean approach, and try to implement the parsing logic from the ground up, usually ending up with the Dijkstra Shunting-Yard Algorithm. The result is neat-looking code, but possibly riddled with bugs. I have developed such an API, JMEP, that does all that, but it took me years to have stable code.
Even with all that work, you can see even from that project page that I am seriously considering to switch over to using JavaCC or ANTLR, even after all that work already done.
A: 11 years into the future from when this question was asked: If you don't want to re-invent the wheel, there are many exotic math parsers out there.
There is one that I wrote years ago which supports arithmetic operations, equation solving, differential calculus, integral calculus, basic statistics, function/formula definition, graphing, etc.
Its called ParserNG and its free.
Evaluating an expression is as simple as:
MathExpression expr = new MathExpression("(34+32)-44/(8+9(3+2))-22");
System.out.println("result: " + expr.solve());
result: 43.16981132075472
Or using variables and calculating simple expressions:
MathExpression expr = new MathExpression("r=3;P=2*pi*r;");
System.out.println("result: " + expr.getValue("P"));
Or using functions:
MathExpression expr = new MathExpression("f(x)=39*sin(x^2)+x^3*cos(x);f(3)");
System.out.println("result: " + expr.solve());
result: -10.65717648378352
Or to evaluate the derivative at a given point(Note it does symbolic differentiation(not numerical) behind the scenes, so the accuracy is not limited by the errors of numerical approximations):
MathExpression expr = new MathExpression("f(x)=x^3*ln(x); diff(f,3,1)");
System.out.println("result: " + expr.solve());
result: 38.66253179403897
Which differentiates x^3 * ln(x) once at x=3.
The number of times you can differentiate is 1 for now.
or for Numerical Integration:
MathExpression expr = new MathExpression("f(x)=2*x; intg(f,1,3)");
System.out.println("result: " + expr.solve());
result: 7.999999999998261... approx: 8
This parser is decently fast and has lots of other functionality.
Work has been concluded on porting it to Swift via bindings to Objective C and we have used it in graphing applications amongst other iterative use-cases.
DISCLAIMER: ParserNG is authored by me.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60"
}
|
Q: Simple web "live chat" software (LAMP stack) that integrates with Jabber/Aim I've looked for this a few times in the past, to no avail. I would like a simple php/ajax web chat interface, that, and this is the critical part, will interface with my IM client (Pidgin) ... via Jabber or Aim. Plugoo is almost what I want, except it is hosted, and flash based. Flash-based would be OK if not ideal, but hosted isn't.
Note that I don't just need notifications, but I want a user of the website who clicks "live chat" to get a chat interface and my IM client allows me to interact with them.
This is super handy for those of us that want to provide live support to clients who do not use IM.
A: (Disclaimer: I work for Jabber, Inc., the commercial company behind the product I'm about to pimp.)
The JabberWerx AJAX libraries do exactly what you want. You include a reference to a Javascript library, add a div tag where you want the chat to go, and add a couple lines of configuration javascript to hook the two together. There's also a one-to-one mode. User accounts can be created on the fly if you like, as well.
Sorry for the ad, but I think it's exactly what you want.
A: This wouldn't be that hard, if you implement the Oscar protocol that AIM uses. It's not very complex, and that would allow you to build a nice web based AIM client for your website. There may be a 3rd party solution that you could use, but as far as I know, Oscar is pretty trivial.
A: I think that http://www.plupper.com is exactly what you are looking for
A: If you use Strophe, it should be easy to get this to work, particularly if you have a copy of Professional XMPP Programming with JavaScript and jQuery, by Jack Moffitt.
A: How much does this product cost? Re this response from Joe Hildebrand.
A: By far the fastest way I can think of would be to add a Google Talk gadget to your page. You will need a gmail account yourself, but visitors to your page don't, they can just start chatting. Google Talk is excellent with Pidgin.
If you wish to roll your own, the Jabber Wiki has a list of web clients for Jabber:
*
*http://www.jabber.org/web/Clients#Web_Browser
JWChat might do what you want.
A: For a PHP-based solution you can try building your application with the Jaxl library. We use this library to build a hosted service at Jaxl IM, which integrates with all XMPP clients (pidgin, psi, gtalk) on any kind of platform (mobile, desktop, web).
Everything we use to build the Jaxl IM solution is out in the open in the form of the Jaxl library, and can be used to build your own custom solutions. Let us know if you need any consultancy/help with your project.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: What is the fastest way to scale and display an image in Python? I am required to display a two dimensional numpy.array of int16 at 20fps or so. Using Matplotlib's imshow chokes on anything above 10fps. There obviously are some issues with scaling and interpolation. I should add that the dimensions of the array are not known, but will probably be around thirty by four hundred.
These are data from a sensor that are supposed to have a real-time display, so the data has to be re-sampled on the fly.
A: The fastest way to display 30x400 data points is to:
Use OpenGL color arrays
If you can quickly transform your data to what OpenGL understands as color array, you could create a vertex array describing quads, one for each sensor, then update your color array and draw this orthographically on screen.
Use OpenGL textures
If you can quickly transform your datapoints to an opengl texture you can draw one quad with fixed UV coordinates that is bound to this texture.
Use pygame
Pygame has support for converting Numpy/Numarray arrays to surfaces; Pygame can then transform such surfaces, which involves resampling, and after resampling you can blit it on screen.
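As a rough sketch of that pygame route (show_frame is a made-up helper, and normalising the int16 data to the 0-255 range is my assumption about what the sensor data needs):
import numpy as np
import pygame
pygame.init()
screen = pygame.display.set_mode((800, 600))
def show_frame(data):
    # data: 2D numpy array of int16, e.g. shape (30, 400).
    # Normalise to 0-255 and stack into a grey-scale RGB array.
    span = max(1.0, float(data.max() - data.min()))
    scaled = ((data.astype(np.float32) - data.min()) / span * 255).astype(np.uint8)
    rgb = np.dstack([scaled] * 3)
    # pygame.surfarray expects (width, height, 3), so swap the axes.
    surface = pygame.surfarray.make_surface(np.transpose(rgb, (1, 0, 2)))
    # Resample to the window size, then blit and flip.
    surface = pygame.transform.scale(surface, screen.get_size())
    screen.blit(surface, (0, 0))
    pygame.display.flip()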
Misc
pyglet makes dealing with opengl very easy
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How can I copy a Delphi TTable including its calculated fields? I have defined a Delphi TTable object with calculated fields, and it is used in a grid on a form. I would like to make a copy of the TTable object, including the calculated fields, open that copy, do some changes to the data with the copy, close the copy, and then refresh the original copy and thusly the grid view. Is there an easy way to get a copy of a TTable object to be used in such a way?
The ideal answer would be one that solves the problem as generically as possible, i.e., a way of getting something like this:
newTable:=getACopyOf(existingTable);
A: You can use the TBatchMove component to copy a table and its structure.
Set the Mode property to specify the desired operation. The Source and Destination properties indicate the datasets whose records are added, deleted, or copied. The online help has additional details.
(Although I reckon you should investigate a TClientDataSet approach - it's certainly more scalable and faster).
A: Let me propose several things:
Let us suppose that you want to make changes programmatically. You could then use DisableControls and EnableControls methods of the TTable to disallow screen updates during that time.
If you want to have two screens with the same data (e.g. to compare data during online changes), you could actually create the same screen twice, with the TTable object being on the screen itself. It will have the exact same configuration (but it will not carry over changes previously made on the first screen; it reads the data from the database). Changes made on one screen will not be automatically refreshed on the other.
Another way: Try using TDataSetProvider with TTable as Dataset (source) feeding a TClientDataSet. ApplyUpdates would feed back the changes to the TTable. Since the calculated fields are read only, they are not affected. (untested, but should work)
A: You should be able to select the table on the form, copy it using Ctrl-C, then paste it into any text editor. You will get the text version of the object's properties which you can then edit as needed. When you are done, select all the text again and you can copy it to the clipboard and paste it back onto a form.
A: I believe that the second approach (TClientDataset) is probably the best method to use in this scenario. An alternative would be to use a memory table (kbmMemTable for instance). Either way, you would clone your original table and then after making your changes loop thru the memory version of your dataset and update your original table.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How can I troubleshoot Sharepoint workflows? I'm a bit lost here and I can't find much documentation.
A: .NET workflows or ones created using SharePoint designer?
I've only got experience with the latter, and they aren't really easy to debug; you really just have to do them one step at a time and test up to each step.
You can view the workflow state as well in SharePoint when you go to the List Settings (sorry I don't have a WSS machine to look at currently).
A: Agree with @Slace. Make sure you create your workflows in SP Designer to be executable from the browser, even if eventually they'll only be kicked off by status changes. That will in itself make troubleshooting easier.
A: Assuming you are talking about SharePoint Designer workflows? You can convert them into .NET workflows following these steps.
From there, you can debug them. I haven't tried it myself (yet).
A: As stated, your only real choice for debugging SharePoint Designer workflows is by either going through the painful process of converting them into .NET worklows, or doing things like writing out the History List after every single step so you can see what is failing.
However, even .NET workflows are very difficult to debug in VS2005. In VS2008, they added the ability to more easily debug workflows developed using that environment.
A: I guess you've already seen this :-)
Troubleshoot workflow errors
A: This article has some great debugging tips for SharePoint. It gives a good general approach to development/debugging. Here are a few of the tools that are referenced:
*
*Reflector
*Fiddler
*DebugView (in fact all of the SysInternals tools)
*IE Dev Toolbar
*XsltMajic
*WinDbg
A: No one mentioned the obvious resource for SharePoint Debugging -- the ULS logs. A ULS viewer filtered on a level of "unexpected" will usually show you the logged cause for the failure.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: ASP.NET: How do I create radio buttons and databind them in a DetailsView? I have a TemplateField in a DetailsView and its input should be one of a few choices in a lookup table. Currently it's a text field, but I want it to be a group of radio buttons, and it should work in both insert and edit mode (the correct current value should be selected in edit mode).
How do I create mutually exclusive radio buttons and databind them in a DetailsView TemplateField?
I'm on ASP.NET 3.5 using an Oracle database as a datasource.
A: <EditItemTemplate>
<asp:RadioButtonList ID="RadioButtonList1" runat="server"
DataSourceID="LookupSqlDataSource" DataTextField="LOOKUPITEM_DESCRIPTION"
DataValueField="LOOKUPITEM_ID" SelectedValue='<%# Bind("ITEM_ID")%>'>
</asp:RadioButtonList>
</EditItemTemplate>
<InsertItemTemplate>
<asp:RadioButtonList ID="RadioButtonList1" runat="server"
DataSourceID="LookupSqlDataSource" DataTextField="LOOKUPITEM_DESCRIPTION"
DataValueField="LOOKUPITEM_ID" SelectedValue='<%# Bind("ITEM_ID")%>'>
</asp:RadioButtonList>
</InsertItemTemplate>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Serializing versioned workflows using Microsoft WF I have a simple business workflow with the following conditions
*
*Users need to change the workflow itself using a designer
*The workflow is a long running workflow, so it will be serialized
Is there a way to automate the task of versioning different workflow assemblies?
A: The versioning of different workflow assemblies is not a trivial task and has a lot of complications. Here you can find a series of posts that deal exactly with this.
A: You can rehost the WF designer in your own application to let the end users change workflows. As you are hosting the designer you pretty much control what they can do. For example you can prevent them from removing or disabling activities and only allow them to add specific new activities in predefined area's of the workflow. The best approach is to save these workflows as XOML files and start them as such. This does mean you cannot add code to the workflow itself but you are free to define your workflow base class derived from SequentialWorkflowActivity (or the state equivalent) and use that as the workflow base class. This allows you to add code and properties. For example you can still add a CodeActivity but you need to link to code in the base class.
Workflow serialization, or dehydration as it is called, is used with running workflows to persist them to disk. This uses standard .NET binary serialization and can be a bit tricky due to the long running nature of workflows. But no big deal once you know what to look for. See http://msmvps.com/blogs/theproblemsolver/archive/2008/09/10/versioning-long-running-workfows.aspx for the start of a series of blog posts.
Not sure if you need it but there is also the capability to change already executing workflows. This uses the WorkflowChanges object. See here http://wiki.windowsworkflowfoundation.eu/default.aspx/WF/RuntimeModificationOfWorkflows.html for more details.
A: Here is another article on workflow versioning:
http://www.adefwebserver.com/DotNetNukeHELP/Workflow/VacationRequest3.htm
Basically you can version workflows that use assemblies if:
*
*Any assembly used with workflows must be strong named.
*If an assembly uses an interface, the interface must also be strong named and placed in a separate assembly.
*An entry in the web.config can instruct ASP.NET where to find the proper assembly.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Catching base Exception class in .NET I keep hearing that
catch (Exception ex)
Is bad practice. However, I often use it in event handlers where an operation may, for example, go to the network, allowing the possibility of many different types of failure. In this case, I catch all exceptions and display the error message to the user in a message box.
Is this considered bad practice? There's nothing more I can do with the exception: I don't want it to halt the application, the user needs to know what happened, and I'm at the top level of my code. What else should I be doing?
EDIT:
People are saying that I should look through the stack of calls and handle errors specifically, because for example a StackOverflow exception cannot be handled meaningfully. However, halting the process is the worst outcome, I want to prevent that at all costs. If I can't handle a StackOverflow, so be it - the outcome will be no worse than not catching exceptions at all, and in 99% of cases, informing the user is the least bad option as far as I'm concerned.
Also, despite my best efforts to work out all of the possible exceptions that can be thrown, in a large code-base it's likely that I would miss some. And for most of them the best defense is still to inform the user.
A: It makes complete sense to catch the exception at the highest level in your code. Catching the base Exception type is fine as long as you don't need to do any different logic based on the exception's type.
Also, make sure you're displaying a friendly, general error message and not showing the actual exception's message. That may lead to security vulnerabilities.
A: Yes, it is fine to catch the base Exception at the top level of the application, which is what you are doing.
The strong reactions you are getting are probably because at any other level, it's almost always wrong to catch the base Exception. Specifically, in a library it would be very bad practice.
A: The bad practice is
catch (Exception ex){}
and variants:
catch (Exception ex){ return false; }
etc.
Catching all exceptions on the top-level and passing them on to the user (by either logging them or displaying them in a message-box, depending on whether you are writing a server- or a client-application), is exactly the right thing to do.
A: It is bad practice in the sense that you shouldn't do it everywhere.
In this case, I would consider it the only reasonable solution as your exception could be truly anything. The only possible improvement would be to add extra handlers before your catch everything for specific error cases where you could do something about the exception.
A: I find the arguments that generic catches are always bad to be overly dogmatic. They, like everything else, have a place.
That place is not your library code, nor the classes you custom-develop for your app. That place is, as many have mentioned, the very top level of the app, where if any exception is raised, it is most likely unexpected.
Here's my general rule (and like all rules, it's designed to be broken when appropriate):
I use classes and custom-built libraries for the majority of the lifting in an app. This is basic app architecture -- really basic, mind you. These guys try to handle as many exceptions as possible, and if they really can't continue, throw the most specific kind available back up to the UI.
At the UI, I tend to always catch all from event handlers. If there is a reasonable expectation of catching a specific exception, and I can do something about it, then I catch the specific exception and handle it gracefully. This must come before the catch all, however, as .NET will only use the very first exception handler which matches your exception. (Always order from most specific to most generic!)
If I can't do anything about the exception other than error out (say, the database is offline), or if the exception truly is unexpected, catch all will take it, log it, and fail safe quickly, with a general error message displayed to the user before dying. (Of course, there are certain classes of errors which will almost always fail ungracefully -- OutOfMemory, StackOverflow, etc. I'm fortunate enough to have not had to deal with those in prod-level code ... so far!)
Catch all has its place. That place is not to hide the exception, that place is not to try and recover (because if you don't know what you caught, how can you possibly recover), that place is not to prevent errors from showing to the user while allowing your app to continue executing in an unknown and bad state.
Catch all's place is to be a last resort, a trap to ensure that if anything makes it through your well-designed and well-guarded defenses, that at a minimum it's logged appropriately and a clean exit can be made. It is bad practice if you don't have well-designed and well-guarded defenses in place at lower levels, and it is very bad practice at lower levels, but done as a last resort it is (in my mind) not only acceptable, but often the right thing to do.
A: It's perfectly okay if you re-raise exceptions you can't handle properly. If you just catch the exceptions you could hide bugs in the code you don't expect. If you catch exceptions to display them (and bypass the die-and-print-traceback-to-stderr behavior) that's perfectly acceptable.
A: I think the poster is referring to exception handling like this:
try {something} catch (SqlException) {do stuff} catch (Exception) {do other stuff}
The idea here is that you want to catch the more specific errors (like SqlException) first and handle them appropriately, rather than always relying on the catch-all general Exception.
The conventional wisdom says that this is the proper way to do exception handling (and that a solo Catch (Exception ex) is bad). In practice this approach doesn't always work, especially when you're working with components and libraries written by someone else.
These components will often throw a different type of exception in production than the one your code was expecting based on how the component behaved in your development environment, even though the underlying problem is the same in both environments. This is an amazingly common problem in ASP.NET, and has often led me to use a naked Catch (Exception ex) block, which doesn't care what type of exception is thrown.
Structured exception handling is a great idea in theory. In practice, it can still be a great idea within the code domain that you control. Once you introduce third party stuff, it sometimes doesn't work very well.
A: We use Catch ex as Exception (VB.Net variant) quite a bit. We log it, and examine our logs regularly. Track down the causes, and resolve.
I think Catch ex as Exception is completely acceptable once you are dealing with production code, AND you have a general way to handle unknown exceptions gracefully. Personally I don't put the generic catch in until I've completed a module / new functionality and put in specialized handling for any exceptions I found in testing. That seems to be the best of both worlds.
A: When I see
catch (Exception ex)
my hand starts groping for a hammer. There are almost no excuses to catch the base Exception. The only valid cases that come to my mind are:
1) A 3rd party component throws Exception (be damned its author)
2) Very top level exceptions handling (as a last resort) (for example handle "unhandled" exceptions in WinForms app)
If you find a case where many different types of exceptions can happen it's a good sign of bad design.
I would disagree with Armin Ronacher. How would you behave if a StackOverflow exception is raised? Trying to perform additional actions can lead to even worse consequences. Catch an exception only if you can handle it in a meaningful and safe way. Catching System.Exception to cover a range of possible exceptions is terribly wrong. Even when you are re-throwing it.
A: No; in that case, if you don't want to halt the program, there's nothing else you can do, and the top level is the right place to do it, as long as you're logging properly and not hiding it away in hope *grin*
A: The important thing is to understand the path of exceptions through your application, and not just throw or catch them arbitrarily. For example, what if the exception you catch is Out-Of-Memory? Are you sure that your dialog box is going to display in that case? But it is certainly fine to define a last-ditch exception point and say that you never want errors to propagate past that point.
A: You should catch the exceptions related to what you are doing. If you look at the methods you call, you will see what exceptions they throw, and you want to stay more specific to those. You should have access to know what exceptions may be thrown by the methods you call, and handle them appropriately.
And... better than having one big try catch, do your try and catch where you need the catch.
try {
myThing.DoStuff();
}
catch (StuffGoneWrongException ex) {
//figure out what you need to do or bail
}
Maybe not quite this closely packed, but it depends on what you are doing. Remember, the job isn't just to compile it and put it on someones desktop, you want to know what breaks if something did and how to fix it. (Insert rant about tracing here)
A: a lot of times exception are catched to free resources, it' s not important if exception is (re)thrown. in these cases you can avoid try catch:
1) for Disposable object you can use "using" keyword:
using(SqlConnection conn = new SqlConnection(connStr))
{
//code
}
Once you are out of the using scope (normally, by a return statement, or by an exception), the Dispose method is automatically called on the object. In other words, it's like a try/finally construct.
2) In ASP.NET, you can intercept the Error or Unload event of the Page object to free your resources.
I hope this helps!
A: I'm responding to "However, halting the process is the worst outcome..."
If you can handle an exception by running different code (using try/catch as control flow), retrying, waiting and retrying, retrying with an different but equivalent technique (ie fallback method) then by all means do so.
It is also nice to do error message replacement and logging, unless it is that pseudo-polite-passive-aggressive "contact your administrator" (when you know there is no administrator and if there was the administrator can't do anything about it!) But after you do that, the application should end, i.e. same behavior you get with an unhandled exception.
On the other hand, if you intend to handle the exception by returning the user to a code thread that has potentially trashed its state, I'd say that is worse than ending the application and letting the user start over. Is it better for the user to have to restart at the beginning or better to let the user destroy data?
If I get an unexpected exception in the module that determines which accounts I can withdraw money from, do I really want to log and report an Exception and return the user to the withdraw money screen? For all we know we just granted him the right to withdraw money from all accounts!
A: This is all good for catching exceptions that you can handle. But sometimes it also happens that, due to an unstable environment or users not following the process correctly, the application runs into an unexpected exception which you haven't listed or handled in code. Is there a way to have such unhandled exceptions captured via the app.config file and display a common error?
Also, put the detailed exception message in a log file.
A: I've been working a fair bit with exceptions, and here's the implementation structure I'm currently following:
*
*Dim everything to Nothing / String.Empty / 0 etc. outside of Try / Catch.
*Initialise everything inside Try / Catch to desired values.
*Catch the most specific exceptions first, e.g. FormatException but leave in base Exception handling as a last resort (you can have multiple catch blocks, remember)
*Almost always Throw exceptions
*Let Application_Error sub in global.asax handle errors gracefully, e.g. call a custom function to log the details of the error to a file and redirect to some error page
*Kill all objects you Dim'd in a Finally block
One example where I thought it was acceptable to not process an exception 'properly' recently was working with a GUID string (strGuid) passed via HTTP GET to a page. I could have implemented a function to check the GUID string for validity before calling New Guid(strGuid), but it seemed fairly reasonable to:
Dim myGuid As Guid = Nothing
Try
myGuid = New Guid(strGuid)
'Some processing here...
Catch ex As FormatException
lblError.Text = "Invalid ID"
Catch ex As Exception
Throw
Finally
If Not myGuid.Equals(Guid.Empty) Then
myGuid = Guid.Empty
End If
End Try
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: Any experience with unusual technologies? 99 Bottles of Beer made me realize that Ada, Erlang and Smalltalk were not such odd languages after all.
There are plenty of unusual tools and I suppose that a lot of them are even used :-)
Have you ever worked with very original technologies? If so, let us know in which context, and what you thought about it. Funny snippets strongly expected.
A: I've been working professionally with Dyalog APL for almost three years now. It's always fun and challenging to learn a completely different language, and the language has its advantages. But I'm more annoyed than intrigued by it nowadays.
Some particular drawbacks:
*
*There's almost no one outside the office to ask if you're stuck. There are almost no resources, tips and tricks available online. And no one else in the world has probably done what you're doing anyway.
*You have to reinvent the wheel all the time, since there's really no class/function library to use. (This can be fun for a geek like me, but not very productive.)
*You constantly have to write workarounds or avoid using "modern" features, since the IDE and interpreter are closed-source, and the vendor is too small to have the resources to fix all bugs.
A: I haven't worked with any unusual technologies but I believe Ada is still very much alive within the defence/aerospace/high reliability circles. It's something I would like to pick up one day.
A: I think the strangest thing I've 'worked' with was in grade 11 in high school. We were learning about back propogation in neural networks, and we had to do an assignment in some strange hypothetical language that our teacher had come up with.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Word 2007 Add-in Okay this question is coming from someone who has never written any code beyond CSS, HTML and some php...
Pretty much I'm using Word all day and constantly needing to refer to just a couple of sites and resources on the web.
I would like to create a little add-in to go in the Ribbon in Word.
I have the full VB 2008 Professional edition.
Pretty much all I'd like it to do atm is have a new tab with a few easy to access buttons which link to specific URLs, although the ideal would be that pushing these links would also automatically log me into the websites at the same time.
Possible?
From this I'll hopefully be able to work off as I learn more...
A: Yes, it is possible, check VSTO.
A: You can definitely do this as a Word add-in (the auto-login part may be tricky...).
Here are some resources:
*
*http://download.microsoft.com/download/a/6/1/a61dd5df-f52c-42d5-a95c-7a7fb7a6a466/ExtendedRibbon.wmv
*http://msdn.microsoft.com/en-us/library/aa338198.aspx
However, there are easier ways to do this. I would rather create a toolbar in my Windows taskpane.
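Going the VSTO route, here's a minimal sketch of what one of those buttons could look like. This assumes a Ribbon item added through the Ribbon (Visual Designer) in Visual Studio; "MyRibbon" and "btnOpenSite" are placeholder names, and the auto-login part is left out since it needs more work (e.g. credentials in the URL, as noted above):
Imports Microsoft.Office.Tools.Ribbon

Public Class MyRibbon

    ' "btnOpenSite" is a hypothetical button created in the Ribbon designer.
    Private Sub btnOpenSite_Click(ByVal sender As Object, ByVal e As RibbonControlEventArgs) Handles btnOpenSite.Click
        ' Opens the URL in the user's default browser.
        System.Diagnostics.Process.Start("http://example.com/reference")
    End Sub

End Class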
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: ReportViewer - LocalReport - Merge reports? I'm using ReportViewer WinForms, and since there is no easy way to create a cover sheet, I wonder: is it possible to render two reports and have them concatenated, so they appear as one report?
If I was to print only, then I could execute two reports after each other, but since the user want to see the report before printing (you know, no environment waste here) then they have to appear in the same viewer.
OR, is there other ways of creating coversheets?
Today I use a subreport, but there are some issues with margins etc. which are not easy to fix.
To clarify, we are talking about
ReportViewer using RDLC files, no
Crystal Reports involved.
A: Do you need to display the 2 reports as 1 in the reportViewer control or would having them both exported to PDF and showing a single PDF containing both reports be satisfactory?
I was looking for that, but using the Web ReportViewer, and found examples exporting the reports to several PDFs and then concatenating the PDFs into one using PDFtk (free); a rough sketch of that approach follows the links below.
*
*Blog post about using PDFtk and Reporting Services
*Multiple RDLC reports displayed at the same time
*PDFtk web site
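To make that concrete, here is a hedged sketch of the export-and-concatenate idea for the WinForms LocalReport, assuming PDFtk is installed and on the PATH; the report paths and output file names are placeholders:
using System.Diagnostics;
using System.IO;
using Microsoft.Reporting.WinForms;

class ReportMerger
{
    // Renders one RDLC report to a PDF file on disk.
    static void RenderToPdf(string reportPath, string outputPdf)
    {
        LocalReport report = new LocalReport();
        report.ReportPath = reportPath;

        string mimeType, encoding, extension;
        string[] streamIds;
        Warning[] warnings;

        byte[] bytes = report.Render("PDF", null, out mimeType, out encoding,
                                     out extension, out streamIds, out warnings);
        File.WriteAllBytes(outputPdf, bytes);
    }

    static void Main()
    {
        // "CoverSheet.rdlc" and "MainReport.rdlc" are placeholder paths.
        RenderToPdf("CoverSheet.rdlc", "cover.pdf");
        RenderToPdf("MainReport.rdlc", "main.pdf");

        // PDFtk merges the two PDFs into a single document.
        Process.Start("pdftk", "cover.pdf main.pdf cat output merged.pdf").WaitForExit();
    }
}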
A: I've created a report that sounds like what you are attempting to do... first to clarify, I'm going to guess you're using Crystal Reports within VS2005/2008.
If that's the case, all you need to do in the main report is create an additional section after your section that contains the "Cover Sheet" layout/data. In the section expert for the "Cover Sheet" section (in layout view, right click on section header bar, pick section expert in pop up menu..), check off the "New Page After" option.
Edit: After your update, I see you are using RDLC reports, and from my limited exposure to those, I can't recall an easy way to get to where you want to be. Though I'm pretty sure you may be able to pass multiple reports to the same report viewer in code.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do I create a readable diff of two spreadsheets using git diff? We have a lot of spreadsheets (xls) in our source code repository. These are usually edited with gnumeric or openoffice.org, and are mostly used to populate databases for unit testing with dbUnit. There are no easy ways of doing diffs on xls files that I know of, and this makes merging extremely tedious and error prone.
I've tried converting the spreadsheets to xml and doing a regular diff, but it really feels like it should be a last resort.
I'd like to perform the diffing (and merging) with git as I do with text files. How would I do this, e.g. when issuing git diff?
A: Hmmm. From the Excel menu choose Window -> Compare side by side?
A: Do you use TortoiseSVN for doing your commits and updates in subversion? It has a diff tool, however comparing Excel files is still not really user friendly. In my environment (Win XP, Office 2007), it opens up two excel files for side by side comparison.
Right click document > Tortoise SVN > Show Log > select revision > right click for "Compare with working copy".
A: Newer versions of MS Office come with Spreadsheet Compare, which performs a fairly nice diff in a GUI. It detects most kinds of changes.
A: There is a library daff (short for data diff) which helps in comparing tables, producing a summary of their diffs, and using such a summary as a patch file.
It is written in Haxe, so it can be compiled to the major languages.
I have made an Excel Diff Tool in Javascript with the help of this library. It works well with numbers & small strings, but the output is not ideal for long strings (e.g. a long sentence with a minor character change).
A: I would use the SYLK file format if performing diffs is important. It is a text-based format, which should make the comparisons easier and more compact than a binary format. It is compatible with Excel, Gnumeric, and OpenOffice.org as well, so all three tools should be able to work well together.
SYLK Wikipedia Article
A: I know several responses have suggested exporting the file to csv or some other text format, and then comparing them. I haven't seen it mentioned specifically, but Beyond Compare 3 has a number of additional file formats that it supports. See Additional File Formats. Using one of the Microsoft Excel File Formats you can easily compare two Excel files without going through the export to another format option.
A: Use Altova DiffDog
Use diffdog's XML diff mode and Grid View to review the differences in an easy to read tabular format. Text diff'ing is MUCH HARDER for spreadsheets of any complexity. With this tool, at least two methods are viable under various circumstances.
*
*Save As .xml
To detect the differences of a simple, one sheet spreadsheet, save the Excel spreadsheets to compare as XML Spreadsheet 2003 with a .xml extension.
*Save As .xlsx
To detect the differences of most spreadsheets in a modularized document model, save the Excel spreadsheets to compare as an Excel Workbook in .xlsx form. Open the files to diff with diffdog. It informs you that the file is a ZIP archive, and asks if you want to open it for directory comparison. Upon agreeing to directory comparison, it becomes a relatively simple matter of double-clicking logical parts of the document to diff them (with the XML diff mode). Most parts of the .xlsx document are XML-formatted data. The Grid View is extremely useful. It is trivial to diff individual sheets to focus the analysis on areas that are known to have changed.
Excel's propensity to tweak certain attribute names with every save is annoying, but diffdog's XML diff'ing capabilities include the ability to filter certain kinds of differences. For example, Excel spreadsheets in XML form contain row and c elements that have s attributes (style) that rename with every save. Setting up a filter like c:s makes it much easier to view only content changes.
diffdog has a lot of diff'ing capability. I've listed the XML diff modes only simply because I haven't used another tool that I liked better when it comes to differencing Excel documents.
A: You can try this free online tool - www.cloudyexcel.com/compare-excel/
It gives a good visual output online, in terms of rows added, deleted, changed etc.
Plus you do not have to install anything.
A: I've done a lot of comparing of Excel workbooks in the past. My technique works very well for workbooks with many worksheets, but it only compares cell contents, not cell formatting, macros, etc. Also, there's some coding involved but it's well worth it if you have to compare a lot of large files repeatedly. Here's how it works:
A) Write a simple dump program that steps through all worksheets and saves all data to tab-separated files. Create one file per worksheet (use the worksheet name as the filename, e.g. "MyWorksheet.tsv"), and create a new folder for these files each time you run the program. Name the folder after the excel filename and add a timestamp, e.g. "20080922-065412-MyExcelFile". I did this in Java using a library called JExcelAPI. It's really quite easy.
B) Add a Windows shell extension to run your new Java program from step A when right-clicking on an Excel file. This makes it very easy to run this program. You need to Google how to do this, but it's as easy as writing a *.reg file.
C) Get BeyondCompare. It has a very cool feature to compare delimited data by showing it in a nice table, see screenshot.
D) You're now ready to compare Excel files with ease. Right-click on Excel file 1 and run your dump program. It will create a folder with one file per worksheet. Right-click on Excel file 2 and run your dump program. It will create a second folder with one file per worksheet. Now use BeyondCompare (BC) to compare the folders. Each file represents a worksheet, so if there are differences in a worksheet BC will show this and you can drill down and do a file comparison. BC will show the comparison in a nice table layout, and you can hide rows and columns you're not interested in.
A: We faced the exact same issue in our company. Our tests output Excel workbooks. Binary diff was not an option, so we rolled out our own simple command line tool. Check out the ExcelCompare project. In fact this allows us to automate our tests quite nicely. Patches / feature requests quite welcome!
A: Quick and easy with no external tools, works well as long as the two sheets you are comparing are similar:
*
*Create a third spreadsheet
*Type =if(Sheet1!A1 <> Sheet2!A1, "X", "") in the top left cell (or equivalent: click on the actual cells to automatically have the references inserted into the formula)
*Ctrl+C (copy), Ctrl+A (select all), Ctrl+V (paste) to fill the sheet.
If the sheets are similar, this spreadsheet will be empty except for a few cells with X in them, highlighting the differences. Unzoom to 40% to quickly see what is different.
A: I have found the xdocdiff WinMerge Plugin. It is a plugin for WinMerge (both open source and freeware; you don't need to write any VBA nor save the Excel file to csv or xml). It works just on the cells' contents.
This plugin supports also:
*
*.rtf Rich Text
*.docx/.docm Microsoft WORD 2007(OOXML)
*.xlsx/.xlsm Microsoft Excel 2007(OOXML)
*.pptx/.pptm Microsoft PowerPoint 2007(OOXML)
*.doc Microsoft WORD ver5.0/95/97/2000/XP/2003
*.xls Microsoft Excel ver5.0/95/97/2000/XP/2003
*.ppt Microsoft PowerPoint 97/2000/XP/2003
*.sxw/.sxc/.sxi/.sxd OpenOffice.org
*.odt/.ods/.odp/.odg Open Document
*.wj2/wj3/wk3/wk4/123 Lotus 123
*.wri Windows3.1 Write
*.pdf Adobe PDF
*.mht Web Archive
*.eml Exported files from OutlookExpress
Regards, Andres
A: I found an openoffice macro here that will invoke openoffice's compare documents function on two files. Unfortunately, openoffice's spreadsheet compare seems a little flaky; I just had the 'Reject All' button insert a superfluous column in my document.
A: xdocdiff plugin for SVN
A: If you're using Java, you could try simple-excel.
It'll diff spreadsheets using Hamcrest matchers and output something like this.
java.lang.AssertionError:
Expected: entire workbook to be equal
but: cell at "C14" contained <"bananas"> expected <nothing>,
cell at "C15" contained <"1,850,000 EUR"> expected <"1,850,000.00 EUR">,
cell at "D16" contained <nothing> expected <"Tue Sep 04 06:30:00">
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
I should qualify that we wrote that tool (like the ticked answer, we rolled our own).
A: If you have TortoiseSVN then you can CTRL click the two files to select them in Windows Explorer and then right-click, TortoiseSVN->Diff.
This works particularly well if you are looking for a small change in a large data set.
A: I had the same problem as you, so I decided to write a small tool to help me out. Please check ExcelDiff_Tools. It comes with several key points:
*
*Supports xls, xlsx, xlsm.
*Formula cells: it will compare both the formula and the value.
*I tried to make the UI look like a standard text diff viewer, with modified, deleted, added and unchanged statuses.
Please take a look at the image below for an example:
A: I'm the co-author of a free, open-source Git extension:
https://github.com/ZoomerAnalytics/git-xltrail
It makes Git work with any Excel workbook file format without any workarounds.
A: Diff Doc may be what you're looking for.
*
*Compare documents of MS Word (DOC, DOCX etc), Excel, PDF, Rich Text (RTF), Text, HTML, XML, PowerPoint, or Wordperfect and retain formatting
*Choose any portion of any document (file) and compare it against any portion of the same or different document (file).
A: I don't know of any tools, but there are two roll-your-own solutions that come to mind, both require Excel:
*
*You could write some VBA code that steps through each Worksheet, Row, Column and Cell of the two Workbooks, reporting differences.
*If you use Excel 2007, you could save the Workbooks as Open-XML (*.xlsx) format, extract the XML and diff that. The Open-XML file is essentially just a .zip file of .xml files and manifests.
You'll end up with a lot of "noise" in either case if your spreadsheets aren't structurally "close" to begin with.
A: Convert to csv, then upload to a version control system, then diff with an advanced version control diff tool. When I used Perforce it had a great diff tool, but I forget the name of it.
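Since the question specifically asks about git diff: building on the convert-to-text idea, git can run such a conversion on the fly through a textconv diff driver, so a plain git diff works on the binary files. A hedged sketch, assuming a converter like xlsx2csv is installed (any tool that prints a text rendition of the workbook to stdout will do; legacy .xls files would need a converter that understands that format):
# .gitattributes -- route spreadsheets through a custom diff driver
*.xlsx diff=spreadsheet
# one-time setup: tell git how to turn a blob into diffable text
git config diff.spreadsheet.textconv "xlsx2csv"
After that, git diff (and git log -p) shows line-based changes between the text renditions instead of "binary files differ".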
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "178"
}
|
Q: Editable data grid for C# WinForms I need to present the user with a matrix of which one column is editable. What is the most appropriate control to use?
I can't use a ListView because you can only edit the first column (the label) and that's no good to me.
Is the DataGridView the way to go, or are there third party alternative components that do a better job?
A: DataGridView is the best choice as it is free and comes with .NET WinForms 2.0. You can define editable columns or read-only. Plus you can customize the appearance if required.
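For instance, a minimal sketch of the one-editable-column setup (the column name here is a placeholder):
// Lock every column, then unlock just the one the user may edit.
dataGridView1.ReadOnly = false;
foreach (DataGridViewColumn col in dataGridView1.Columns)
    col.ReadOnly = true;
dataGridView1.Columns["Quantity"].ReadOnly = false;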
A: DataGridView is good.
If you prefer a prettier interface, Telerik controls are better.
A: If DataGridView will handle your needs, it's the right answer. Another option (although it seems to be unpopular around these parts!) is Infragistics NetAdvantage. The downsides to Infragistics are primarily a high cost and somewhat steep learning curve; the upsides are that these are some of the most powerful controls you'll ever find -- so if you need their flexibility, go for it.
I don't have experience with Telerik (which has been mentioned by others here), but they do seem quite good. Being that my company has invested fairly heavily in Infragistics, we're not liable to switch any time soon ...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How to connect to LDAP store with VB6 I’ve got a problem with Visual Basic (6) in combination with LDAP. When I try to connect to an LDAP store, I always get errors like ‘Bad Pathname’ or ‘Table does not exist’ (depending on what the code looks like).
This is the part of the code I wrote to connect:
path = "LDAP://xx.xxx.xxx.xxx:xxx/"
Logging.WriteToLogFile "Test1", logINFO
Set conn = CreateObject("ADODB.Connection")
conn.Provider = "ADsDSOObject"
conn.Properties("User ID") = "USER_ID"
conn.Properties("Password") = "PASSWORD"
conn.Properties("Encrypt Password") = True
conn.Properties("ADSI Flag") = 34
Logging.WriteToLogFile "Test2", logINFO
conn.Open "Active Directory Provider"
Logging.WriteToLogFile "Test3", logINFO
Set rs = conn.Execute("<" & path & "ou=Some,ou=Kindof,o=Searchbase>;(objectclass=*);name;subtree")
Logging.WriteToLogFile "Test4", logINFO
The logfile shows “Test1” , “Test2”, “Test3” and then “Table does not exist”, so it’s the line “Set rs = conn.Execute(…)” where things go wrong (pretty obvious…).
In my code, I try to connect in a secure way. I found out it has nothing to do with SSL/certificates though, because it’s also not possible to establish an anonymous unsecured connection. Funny thing is: I wrote a small test app in .NET in five minutes. With that app I was able to connect (anonymously) and read results from the LDAP store, no problems at all.
Does anyone have any experience with the combination LDAP and VB6 and maybe know what could be the problem? I googled and saw some example code snippets, but unfortunately none of them worked (same error messages as result). Thanks in advance!
A: I'm not sure how much help this will be, but I use this code to access Active Directory objects.
Set oinfo = New ADSystemInfo
sDomain = Split(oinfo.DomainDNSName, ".")
'-- Get Datasets from the Active Directory
'-- Connect to Active Directory in logged in domain
con.Open "Provider=ADsDSOObject;Encrypt Password=False;Integrated Security=SSPI;Data Source=ADSDSOObject;Mode=Read;Bind Flags=0;ADSI Flag=-2147483648"
'-- Query all serviceConnectionPoints in the Active Directory
'-- that contain the keyword "urn://tavis.net/TM/Database"
'-- and return the full path to the object
Set rst = con.Execute("<LDAP://DC=" & sDomain(0) & ",DC=" & sDomain(1) & ">;(&(objectCategory=serviceConnectionPoint)(keywords=urn://tavis.net/TM/Database));Name, AdsPath;subTree")
A: 2 things:
*
*The Open() method call takes additional parameters, username/password (see the snippet below)
*The LDAP query you passed to Execute() should be:
"<" & path & "ou=Some/ou=Kindof/o=Searchbase>;(objectclass=*);name;subtree"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: how to find rowsize in table One of my DBs has grown close to the permitted size.
In order to find the table containing the most data, I used the following query:
exec sp_MSforeachtable @command1="print '?' exec sp_spaceused '?'"
It returned the culprit table with the most data.
As a next step, I want to clean up rows based on their size. For this, I would like to order the rows by size.
How to achieve this using a query? Are there any tools to do this?
A: This will give you a list of rows by size, just set @table and @idcol accordingly (as written it'll run against the Northwind sample)
declare @table varchar(20)
declare @idcol varchar(10)
declare @sql varchar(1000)
set @table = 'Employees'
set @idcol = 'EmployeeId'
set @sql = 'select ' + @idcol +' , (0'
select @sql = @sql + ' + isnull(datalength(' + name + '), 1)'
from syscolumns where id = object_id(@table)
set @sql = @sql + ') as rowsize from ' + @table + ' order by rowsize desc'
exec (@sql)
A: An easier approach for all table sizes is to use the stored procedure at this site.
You could alter the select statement of that stored procedure to:
SELECT *
FROM #TempTable
Order by dataSize desc
to have it ordered by size.
How do you want to clean up? Clean up the biggest rows of a specific table? I'm not sure I understand the question.
EDIT (response to comment)
Assuming your eventlog has the same layout as mine (DNN eventlog):
SELECT LEN(CONVERT(nvarchar(MAX), LogProperties)) AS length
FROM EventLog
ORDER BY length DESC
A: You can also use this to get the size of the indexes and keys: (edit: sorry for the wall of text, can't get the formatting to work)
WITH table_space_usage
( schema_name, table_name, index_name, used, reserved, ind_rows, tbl_rows )
AS (
SELECT s.Name
, o.Name
, coalesce(i.Name, 'HEAP')
, p.used_page_count * 8
, p.reserved_page_count * 8
, p.row_count
, case when i.index_id in ( 0, 1 ) then p.row_count else 0 end
FROM sys.dm_db_partition_stats p
INNER JOIN sys.objects as o
ON o.object_id = p.object_id
INNER JOIN sys.schemas as s
ON s.schema_id = o.schema_id
LEFT OUTER JOIN sys.indexes as i
on i.object_id = p.object_id and i.index_id = p.index_id
WHERE o.type_desc = 'USER_TABLE'
and o.is_ms_shipped = 0
)
SELECT t.schema_name
, t.table_name
, t.index_name
, sum(t.used) as used_in_kb
, sum(t.reserved) as reserved_in_kb
, case grouping(t.index_name)
when 0 then sum(t.ind_rows)
else sum(t.tbl_rows) end as rows
FROM table_space_usage as t
GROUP BY
t.schema_name
, t.table_name
, t.index_name
WITH ROLLUP
ORDER BY
grouping(t.schema_name)
, t.schema_name
, grouping(t.table_name)
, t.table_name
, grouping(t.index_name)
, t.index_name
A: Maybe something like this will work
delete table where id in
(
select top 100 id
from table
order by datalength(event_text) + datalength(varchar_column) desc
)
(since you are dealing with an event table, it's probably a text column you are looking at ordering on, so the datalength sql command is key here)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: ReportViewer - modify toolbar? Do anyone have good ideas of how to modify the toolbar for the WinForms version of the ReportViewer Toolbar?
That is, I want to remove some buttons and various other things, but it looks like the solution is to create a brand new toolbar instead of modifying the one that is there.
Like, I had to remove export to excel, and did it this way:
// Disable excel export
foreach (RenderingExtension extension in lr.ListRenderingExtensions()) {
if (extension.Name == "Excel") {
//extension.Visible = false; // Property is readonly...
FieldInfo fi = extension.GetType().GetField("m_isVisible", BindingFlags.Instance | BindingFlags.NonPublic);
fi.SetValue(extension, false);
}
}
A bit tricky, if you ask me...
For removing toolbar buttons, a possible way was to iterate through the Control array inside the ReportViewer and change the Visible property for the buttons to hide, but it gets reset all the time, so it is not a good way...
When will MS come with a new version, btw?
A: Yep. You can do that in a slightly tricky way.
I had a task to add more scale factors to zoom report. I did it this way:
private readonly string[] ZOOM_VALUES = { "25%", "50%", "75%", "100%", "110%", "120%", "125%", "130%", "140%", "150%", "175%", "200%", "300%", "400%", "500%" };
private readonly int DEFAULT_ZOOM = 3;
//--
public ucReportViewer()
{
InitializeComponent();
this.reportViewer1.ProcessingMode = ProcessingMode.Local;
setScaleFactor(ZOOM_VALUES[DEFAULT_ZOOM]);
Control[] tb = reportViewer1.Controls.Find("ReportToolBar", true);
ToolStrip ts;
if (tb != null && tb.Length > 0 && tb[0].Controls.Count > 0 && (ts = tb[0].Controls[0] as ToolStrip) != null)
{
//here we go if our trick works (tested at .NET Framework 2.0.50727 SP1)
ToolStripComboBox tscb = new ToolStripComboBox();
tscb.DropDownStyle = ComboBoxStyle.DropDownList;
tscb.Items.AddRange(ZOOM_VALUES);
tscb.SelectedIndex = 3; //100%
tscb.SelectedIndexChanged += new EventHandler(toolStripZoomPercent_Click);
ts.Items.Add(tscb);
}
else
{
//if there is some problems - just use context menu
ContextMenuStrip cmZoomMenu = new ContextMenuStrip();
for (int i = 0; i < ZOOM_VALUES.Length; i++)
{
ToolStripMenuItem tsmi = new ToolStripMenuItem(ZOOM_VALUES[i]);
tsmi.Checked = (i == DEFAULT_ZOOM);
//tsmi.Tag = (IntPtr)cmZoomMenu;
tsmi.Click += new EventHandler(toolStripZoomPercent_Click);
cmZoomMenu.Items.Add(tsmi);
}
reportViewer1.ContextMenuStrip = cmZoomMenu;
}
}
private bool setScaleFactor(string value)
{
try
{
int percent = Convert.ToInt32(value.TrimEnd('%'));
reportViewer1.ZoomMode = ZoomMode.Percent;
reportViewer1.ZoomPercent = percent;
return true;
}
catch
{
return false;
}
}
private void toolStripZoomPercent_Click(object sender, EventArgs e)
{
ToolStripMenuItem tsmi = sender as ToolStripMenuItem;
ToolStripComboBox tscb = sender as ToolStripComboBox;
if (tscb != null && tscb.SelectedIndex > -1)
{
setScaleFactor(tscb.Items[tscb.SelectedIndex].ToString());
}
else if (tsmi != null)
{
if (setScaleFactor(tsmi.Text))
{
foreach (ToolStripItem tsi in tsmi.Owner.Items)
{
ToolStripMenuItem item = tsi as ToolStripMenuItem;
if (item != null && item.Checked)
{
item.Checked = false;
}
}
tsmi.Checked = true;
}
else
{
tsmi.Checked = false;
}
}
}
A: Get the toolbar from ReportViewer control:
ToolStrip toolStrip = (ToolStrip)reportViewer.Controls.Find("toolStrip1", true)[0]
Add new items:
toolStrip.Items.Add(...)
A: There are a lot of properties to set which buttons would you like to see.
For example ShowBackButton, ShowExportButton, ShowFindControls, and so on. Check them in the help, all starts with "Show".
But you are right, you cannot add new buttons. You have to create your own toolbar in order to do this.
What do you mean about new version? There is already a 2008 SP1 version of it.
A: Another way would be to manipulate the generated HTML at runtime via javascript. It's not very elegant, but it does give you full control over the generated HTML.
A: For VS2013 web ReportViewer V11 (indicated as rv), the code below adds a button.
private void AddPrintBtn()
{
foreach (Control c in rv.Controls)
{
foreach (Control c1 in c.Controls)
{
foreach (Control c2 in c1.Controls)
{
foreach (Control c3 in c2.Controls)
{
if (c3.ToString() == "Microsoft.Reporting.WebForms.ToolbarControl")
{
foreach (Control c4 in c3.Controls)
{
if (c4.ToString() == "Microsoft.Reporting.WebForms.PageNavigationGroup")
{
var btn = new Button();
btn.Text = "Criteria";
btn.ID = "btnFlip";
btn.OnClientClick = "$('#pnl').toggle();";
c4.Controls.Add(btn);
return;
}
}
}
}
}
}
}
}
A: I had this question for a long time, and I finally found the answer; the main source of knowledge I used was this webpage. I'd like to thank you all, guys; I'm adding the code that allowed me to do it and a picture with the result.
Instead of using the ReportViewer class, you need to create a new class; in my case, I named it ReportViewerPlus, and it goes like this:
using Microsoft.Reporting.WinForms;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Windows.Forms;
namespace X
{
class ReportViewerPlus : ReportViewer
{
private Button boton { get; set; }
public ReportViewerPlus(Button but) {
this.boton = but;
testc(this.Controls[0]);
}
public ReportViewerPlus()
{
}
private void testc(Control item){
if(item is ToolStrip)
{
ToolStripItemCollection tsic = ((ToolStrip)item).Items;
tsic.Insert(0, new ToolStripControlHost(boton));
return;
}
for (int i = 0; i < item.Controls.Count; i++)
{
testc(item.Controls[i]);
}
}
}
}
You have to add the button directly in the constructor of the class and you can configure the button in your designer.
Here's a pic of the result; not perfect, but good enough to go on (safe link I swear, but I can't post my own pics, don't have enough reputation).
http://prntscr.com/5lfssj
If you look carefully at the code of the class, you'll see more or less how it works, and you could make your own changes and place it at a different spot in the toolbar.
Thank you so much for helping me in the past, I hope this helps lots of people!
A: Generally you are supposed to create your own toolbar if you want to modify it. Your solution for removing buttons will probably work if that is all you need to do, but if you want to add your own you should probably just bite the bullet and build a replacement.
A: You may modify the ReportViewer controls via the CustomizeReportToolStrip method.
This example removes the Page Setup and Page Layout buttons in WinForms:
public CustOrderReportForm() {
InitializeComponent();
CustomizeReport(this.reportViewer1);
}
private void CustomizeReport(Control reportControl, int recurCount = 0) {
Console.WriteLine("".PadLeft(recurCount + 1, '.') + reportControl.GetType() + ":" + reportControl.Name);
if (reportControl is Button) {
CustomizeReportButton((Button)reportControl, recurCount);
}
else if (reportControl is ToolStrip) {
CustomizeReportToolStrip((ToolStrip)reportControl, recurCount);
}
foreach (Control childControl in reportControl.Controls) {
CustomizeReport(childControl, recurCount + 1);
}
}
//-------------------------------------------------------------
void CustomizeReportToolStrip(ToolStrip c, int recurCount) {
List<ToolStripItem> customized = new List<ToolStripItem>();
foreach (ToolStripItem i in c.Items) {
if (CustomizeReportToolStripItem(i, recurCount + 1)) {
customized.Add(i);
}
}
foreach (var i in customized) c.Items.Remove(i);
}
//-------------------------------------------------------------
void CustomizeReportButton(Button button, int recurCount) {
}
//-------------------------------------------------------------
bool CustomizeReportToolStripItem(ToolStripItem i, int recurCount) {
Console.WriteLine("".PadLeft(recurCount + 1, '.') + i.GetType() + ":" + i.Name);
if (i.Name == "pageSetup") {
return true;
}
else if (i.Name == "printPreview") {
return true;
}
return false;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Potential other uses of a jabber server Besides the obvious person-to-person instant message chat, what else have you used a Jabber server's functionality to enable?
Edit: links to working code to really show it off are particularly useful - and will be more likely to be voted up.
A: There are unlimited uses for XMPP/Jabber.
Take any message/data you want to send somewhere else and you can use jabber. Run a centralised logging service for distributed services? You can jabber the message.
You want to check if your services/programs are running? XMPP presence will tell you. If you add custom status messages you can see exactly what is going on.
This is why Cisco has got into the game. Picture a server farm where each blade has a built-in mini jabber client. On boot up it will register its presence to the central server as awaiting work. The central server fires off some work in its direction and it then changes its status to "Busy". Another blade finishes its work and changes its status back to "Available"... rinse and repeat.
When you combine the actual jabber messages with XMPP's Out Of Band abilities, these servers can post where the results of the job can be found.
Anything you can think of needing to pass a message can be done with XMPP to some degree. Be this person to person, program to program, or any combination.
A: You could use a Jabber server to handle/broker messages between a client application and another server application.
It can actually be pretty effective.
A: Not me but Martin Woodward used jabber to control a "build bunny" that displays the current status of the build server.
http://www.woodwardweb.com/gadgets/000434.html
A: XMPP is good for sending messages back and forth between computers when they don't need to be broken into chunks. They also can't be terribly big. If you use the right library, it can be pretty easy to set up.
A: Apple implements mobileme's push service using Jabber/XMPP's subscription services to send push notifications. That is the most widespread use of Jabber for non-IM purposes I know of. This article has more details.
My friends have also built a Jabber python bot, which is kinda cute but not all that useful :-)
Edit
The most recent Next Big Thing, Google Wave, uses Jabber under the hood. It further illustrates the power of the protocol.
A: Sending messages to a web page. Proof-of-concept: esagila.com
A: I plan to use it to receive notifications from my system, such as:
*
*Process did not finish
*Report was not generated on time
*User needs help
I already receive many of these messages as email. But receiving an IM could be much more effective.
A: You might want to look at Vertebra which is...
a framework for orchestrating complex processes in a Cloud. It is designed with an emphasis on security, fault tolerance, and portability.
From the knowledge base:
Why was XMPP chosen for Vertebra?
A: XMPP based instant messaging can be a good alternative to search engines for information that is small, complete in itself and required frequently and repeatedly. For example, your daily horoscope - you require it daily and it is not large.
To see an example of this add astro@askme.im to your list of contacts in your jabber client (Gmail Chat/Gtalk/or any other Jabber client) and then initiate chat with this contact by sending the word "help".
Also see www.askme.im for a whole list of chat based solutions.
A: I've used Jabber in the past to get email notifications. Nowadays I use it for low-priority nagios notifications; it is very useful and way cheaper than SMS.
A: We have used XMPP and BOSH to enable users to communicate with a webbrowser directly and in realtime from their phone.
For example code, you can view our open source API.
The vooices site also has live examples where you can control a map and play a game using your phone via your web browser: http://www.vooices.us/
A: We use xmpp as both a 'bus' and a real-time API at http://superfeedr.com
A: Iowa State University Department of Agronomy has created this with Jabber: http://mesonet.agron.iastate.edu/iembot/
If you're a weather freak like I am, this is VERY cool stuff!
A: I've always thought XMPP would be a good way to deliver SNMP data. OIDs are really painful, much of the system is insecure, and the SNMP traps never work quite like you want them to. With an XMPP server in the middle and a smart component to make some choices, you can use it to send out jabber or other notifications, kick off restart jobs, update web pages, or whatever else you need.
The XML data is pretty small in this case, and you can have the one XMPP server both talk to humans in message stanzas, or computers with the same protocol.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Toggling the state of a menu item I have an Eclipse RCP app I'm working on. It has some view-specific menus and one of the menu items is an item which I would like to display a tick next to when the corresponding functionality is enabled. Similarly, the next time the item is selected, the item should become unticked to reflect that the corresponding functionality is disabled.
My question is this: how do I set the toggle state of these menu items? I have an IHandler to deal with the event when the menu item is selected but I'm unsure how to update the GUI element itself.
Does the StackOverflow community have any thoughts on how I might solve this?
A: The solution involves having the command handler implement the IElementUpdater interface. The UI element can then be updated as so:
public void updateElement(UIElement element, Map parameters)
{
element.setChecked(isSelected);
}
updateElement is called as part of a UI refresh which can be invoked from the handler's execute command as so:
ICommandService service = (ICommandService) HandlerUtil
.getActiveWorkbenchWindowChecked(event).getService(
ICommandService.class);
service.refreshElements(event.getCommand().getId(), null);
Lots more info here (see Radio Button Command and Update checked state entries)
A: Maybe things have changed, maybe there's a difference between RCP and an Eclipse plug-in, but I just set the style of my action to toggle, and it handled the toggling automatically. You can see the source code on github.com.
<action
class="live_py.EnableAction"
id="live-py.enable.action"
label="Li&ve Coding"
menubarPath="pyGeneralMenu/pyNavigateGroup"
state="false"
style="toggle">
To see whether the feature is currently enabled, I just look up the action by its id, and then call isChecked().
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Adding custom headers I need to create and add custom headers to an ASP.NET 2.0 application.
The case is simulating an SSO login in our dev/test environment.
When I try to add headers I run into the "Not supported on this platform."
error. BigJim has a nice post on the subject here:
http://bigjimindc.blogspot.com/2007/07/ms-kb928365-aspnet-requestheadersadd.html
The root of my problem lies in the fact that I need to simulate various
persons logging into my application. Not just adding static data in a
HttpModule. I need to take values from a couple of TextBoxes and transfer
information from these into custom headers and then re-direct the user. The
HttpModule stuff happens too early in the pipeline...
Does anyone know if there exists a simple redirect/proxy solution that one
could use in a dev environment? Or a simple/beautiful way of doing it in code?
A: One method I have used before, though a long-winded approach, is NUnitASP.
This is based on the NUnit framework but intended for ASP.NET UI Testing.
It basically starts a browser in memory, and is able to manipulate the content exactly like a user would.
Using this you could view your page, enter data into textboxes and submit pages.
Hopefully that can help you do the testing you require. I've used it to test load, and spider through sites of mine to gather data.
A: If you use IIS 7 you can set the Pipeline Mode to integrated
This Setting is found in the App-Pool Properties.
A: I could be wrong, but doesn't the Response.AddHeader() method still work? I agree with Oscar that a formal testing solution like NUnitASP is a good idea, although NUnitASP is a little dated. I still use it for some of my projects because it still works; it just isn't as refined or as simple as WaTiN or similar projects.
A: The browser drops the header if you do a Response.AddHeader()...
The header must be added to the original Request...
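Along those lines, here is a hedged sketch based on the reflection workaround from the blog post linked in the question: HttpRequest.Headers is a read-only collection, so its non-public members are invoked via reflection to slip a header in. The member names are framework internals and may change between versions:
using System;
using System.Collections;
using System.Collections.Specialized;
using System.Reflection;
using System.Web;

public static class RequestHeaderHelper
{
    public static void AddHeader(HttpRequest request, string name, string value)
    {
        NameValueCollection headers = request.Headers;
        Type t = headers.GetType();
        BindingFlags flags = BindingFlags.InvokeMethod |
                             BindingFlags.NonPublic | BindingFlags.Instance;

        ArrayList values = new ArrayList();
        values.Add(value);

        // Temporarily lift the read-only flag, add the header, restore it.
        // These member names are internal to the framework (assumption).
        t.InvokeMember("MakeReadWrite", flags, null, headers, null);
        t.InvokeMember("InvalidateCachedArrays", flags, null, headers, null);
        t.InvokeMember("BaseAdd", flags, null, headers, new object[] { name, values });
        t.InvokeMember("MakeReadOnly", flags, null, headers, null);
    }
}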
A: Why not use the ASP.NET forms authentication model?
You define your "private folders". If you attempt to access a private folder without logging in, you are automatically redirected to your custom login page.
Here's a couple of links:
http://support.microsoft.com/kb/301240
http://www.asp.net/learn/security/tutorial-02-cs.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Stress-testing ASP.NET/IIS with WCAT I'm trying to setup a stress/load test using the WCAT toolkit included in the IIS Resources.
Using LogParser, I've processed a UBR file with configuration. It looks something like this:
[Configuration]
NumClientMachines: 1 # number of distinct client machines to use
NumClientThreads: 100 # number of threads per machine
AsynchronousWait: TRUE # asynchronous wait for think and delay
Duration: 5m # length of experiment (m = minutes, s = seconds)
MaxRecvBuffer: 8192K # suggested maximum received buffer
ThinkTime: 0s # maximum think-time before next request
WarmupTime: 5s # time to warm up before taking statistics
CooldownTime: 6s # time to cool down at the end of the experiment
[Performance]
[Script]
SET RequestHeader = "Accept: */*\r\n"
APP RequestHeader = "Accept-Language: en-us\r\n"
APP RequestHeader = "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705)\r\n"
APP RequestHeader = "Host: %HOST%\r\n"
NEW TRANSACTION
classId = 1
NEW REQUEST HTTP
ResponseStatusCode = 200
Weight = 45117
verb = "GET"
URL = "http://Url1.com"
NEW TRANSACTION
classId = 3
NEW REQUEST HTTP
ResponseStatusCode = 200
Weight = 13662
verb = "GET"
URL = "http://Url1.com/test.aspx"
Does it look OK?
I execute the controller with this command: wcctl -z StressTest.ubr -a localhost
The Client(s) is executed like this: wcclient localhost
When the client is executed, I get this error: main client thread Connect Attempt 0 Failed. Error = 10061
Has anyone in this world ever used WCAT?
A: I'd look at updating to WCat 6.3 - available here for x86 and here for x64
They've changed the settings/scenario file structures, which is a little painful, but should suit your needs.
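For orientation only (I'm going from memory here, so treat this as a rough sketch and check the samples and documentation that ship with 6.3 for the exact grammar), the new scenario files use a C-style block syntax along these lines:
// illustrative only -- compare against the samples bundled with wcat 6.3
scenario
{
    name     = "home page";
    warmup   = 5;
    duration = 300;

    default
    {
        setheader { name = "Host"; value = server(); }
        version    = HTTP11;
        statuscode = 200;
    }

    transaction
    {
        id     = "fetch home";
        weight = 100;
        request { url = "/"; }
    }
}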
A: I've just started evaluating WCAT 6.3 and I'm afraid my experience has been a bit disappointing in terms of online support/community.
There is also a major bug in the wcat.wsf script - see:
http://forums.iis.net/t/1153312.aspx
I'm now struggling with getting performance counter measurement working.
A: I've had good success with WCAT, though I'm struggling with simulating NTLM connections.
I'm using 6.3, so my config files look very different from yours. Some gotchas I noted along the way:
+ Make sure you've got your firewall turned off, or holes punched through for WMI.
+ Each thing you set in the request header has a tremendous impact on throughput. Apples-to-apples comparisons must use the same request headers.
+ Remote calls with multiple clients work only after correcting the bug identified by sthorogood.
Once I crossed those hurdles, I got great results from WCAT. It tests quickly, repeatably, and aggressively.
Best of luck,
Kevin
A: I don't have an answer for you, but have you considered using other tools for your testing? The WCAT tools seems pretty limited and complicated to use.
OpenSTA and JMeter are good open source tools for load/stress/performance testing.
A: OpenSTA and JMeter look very Apache-like. I'm running IIS on Windows Server 2003.
A: Have you looked at the Microsoft Web Application Stress Tool?
A: For performance counters you can specify a .prf file with -p in the same command run for the controller, as in:
wcctl -c config.txt -d distribution.txt -s script.txt -a localhost -p performance.prf
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Reading from a promiscuous network device I want to write a real-time analysis tool for wireless traffic.
Does anyone know how to read from a promiscuous (or sniffing) device in C?
I know that you need to have root access to do it. I was wondering if anyone knows what functions are necessary to do this. Normal sockets don't seem to make sense here.
A: I once had to listen on raw ethernet frames and ended up creating a wrapper for this. By calling the function with the device name, e.g. eth0, I got a socket in return that was in promiscuous mode.
What you need to do is to create a raw socket and then put it into promiscuous mode. Here is how I did it.
int raw_init (const char *device)
{
struct ifreq ifr;
int raw_socket;
memset (&ifr, 0, sizeof (struct ifreq));
/* Open A Raw Socket */
if ((raw_socket = socket (PF_PACKET, SOCK_RAW, htons (ETH_P_ALL))) < 1)
{
printf ("ERROR: Could not open socket, Got #?\n");
exit (1);
}
/* Set the device to use */
strcpy (ifr.ifr_name, device);
/* Get the current flags that the device might have */
if (ioctl (raw_socket, SIOCGIFFLAGS, &ifr) == -1)
{
perror ("Error: Could not retrive the flags from the device.\n");
exit (1);
}
/* Set the old flags plus the IFF_PROMISC flag */
ifr.ifr_flags |= IFF_PROMISC;
if (ioctl (raw_socket, SIOCSIFFLAGS, &ifr) == -1)
{
perror ("Error: Could not set flag IFF_PROMISC");
exit (1);
}
printf ("Entering promiscuous mode\n");
/* Configure the device */
if (ioctl (raw_socket, SIOCGIFINDEX, &ifr) < 0)
{
perror ("Error: Error getting the device index.\n");
exit (1);
}
return raw_socket;
}
Then when you have your socket you can just use select to handle packets as they arrive.
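A minimal sketch of such a loop, assuming the raw_init() function above (error handling kept to a bare minimum):
#include <stdio.h>
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>

void capture_loop (int raw_socket)
{
    unsigned char frame[65536];
    fd_set readfds;
    ssize_t len;

    for (;;)
    {
        FD_ZERO (&readfds);
        FD_SET (raw_socket, &readfds);

        /* Block until at least one frame is ready to be read */
        if (select (raw_socket + 1, &readfds, NULL, NULL, NULL) < 0)
            break;

        len = recvfrom (raw_socket, frame, sizeof (frame), 0, NULL, NULL);
        if (len < 0)
            break;

        printf ("captured %ld bytes\n", (long) len);
    }
}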
A: You could use the pcap library (see http://www.tcpdump.org/pcap.htm) which is also used by tcpdump and Wireshark.
A: On Linux you use a PF_PACKET socket to read data from a raw device, such as an ethernet interface running in promiscuous mode:
s = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL))
This will send copies of every packet received up to your socket. It is quite likely that you don't really want every packet, though. The kernel can perform a first level of filtering using BPF, the Berkeley Packet Filter. BPF is essentially a stack-based virtual machine: it handles a small set of instructions such as:
ldh = load halfword (from packet)
jeq = jump if equal
ret = return with exit code
BPF's exit code tells the kernel whether to copy the packet to the socket or not. It is possible to write relatively small BPF programs directly, using setsockopt(s, SOL_SOCKET, SO_ATTACH_FILTER, ). (WARNING: The kernel takes a struct sock_fprog, not a struct bpf_program, do not mix those up or your program will not work on some platforms).
For anything reasonably complex, you really want to use libpcap. BPF is limited in what it can do, in particular in the number of instructions it can execute per packet. libpcap will take care of splitting a complex filter up into two pieces, with the kernel performing a first level of filtering and the more-capable user-space code dropping the packets it didn't actually want to see.
libpcap also abstracts the kernel interface out of your application code. Linux and BSD use similar APIs, but Solaris requires DLPI and Windows uses something else.
A: Why wouldn't you use something like WireShark?
It is open source, so at least you could learn a few things from it if you don't want to just use it.
A: WireShark on linux has the capability to capture the PLCP (physical layer convergence protocol) header information.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Should I learn/become proficient in Javascript? I am a .NET webdev using ASP.NET, C# etc... I "learned" javascript in college 5+ years ago and can do basic jobs with it. But I wonder if it is useful to become proficient in it.
Why should I learn Javascript?
Is it more advantageous then learning JQuery or a different library?
A: Hands down yes. There's a reason that Google have made such a big fuss about the V8 JS engine for Chrome, why Mozilla are working on TraceMonkey for Firefox and why Webkit have been working on Squirrelfish for a while (now Squirrelfish extreme). It's because JS is becoming more popular by the day.
A: Yes, definitely learn Javascript before you learn one of the libraries about. It's the whole walk-before-you-can-run thing.
A: Javascript is one of those languages where spending a few hours learning will probably teach you 99% of what you will ever really use. I would imagine you are at the point in your javascript learning where you know more than enough, and can just learn one or more of the frameworks now.
A: I would recommend brushing up on your non-frameworked javascript first. Refreshing/learning basic concepts of dom manipulation and what not. Like learning how to build a linked list, stack or queue in C++ before learning how to use the STL (standard template libraries).
In addition to brushing up on straight javascript, it might be good to get into a framework that doesn't abstract and change the way things work so much, for instance Prototype. You code with it very much the same way you code with straight javascript. Read through the Prototype code, learn how to make classes, and do some fancy stuff. From experience, I can say reading through Prototype.js helped me learn a lot.
After messing around a bit, then I'd say go for jQuery. If jQuery didn't, literally, change the way you write code I'd say go for it first, but learning how to build classes and js inheritance and what not can be a very important lesson for someone who wants to become fluent in JS.
A: Learning javascript is recommended for any web application developer. Why?
*
*You will better understand the possibilities, limitations and dangers related to developing a web application
*It is a boost for your career, if you are working on a web application that has a user interface.
However, learning javascript is usually a trade-off between a programming language and another. You should consider whether javascript is relevant for your career or project.
A: Make sure you add these sites to your bookmarks:
Mozilla's developer site: This contains the reference to the Javascript API in Mozilla. This will help you make sure you're writing code that Firefox understands.
IE's site in Microsoft Developer Network: The same, for IE.
W3's reference of DOM for HTML: In most web applications today, the Javascript code manipulates the DOM, which is an internal keeping track of the objects displayed on screen (but you already knew that, right ?) This is the reference to the DOM API. It is language neutral, which means it does not target Javascript, but these methods exist in Javascript too.
Douglas Crockford' site: Doug Crockford is THE MAN when it comes down to Javascript. The articles in his page are a must read. Because Javascript has closures and first-class functions, he believes it is closer to Lisp and Scheme than to other languages. And he teaches you how to greatly improve your code with these language features.
Yahoo Developer network: You may also want to check this. I'm not a regular visitor to this site, though, so I can't really say much about it.
A: Yes, absolutely you should learn JavaScript if you are doing web development. I highly recommend JavaScript: The Good Parts, by Doug Crockford. And, JQuery is a great framework to use (this site uses it) -- it kind of depends on what you are trying to do -- YUI and ExtJS are also very nice.
A: The answer is simple.
A: Unless you want to really get into javascript, I think you'd be better off learning enough JS to leverage one of the tried and tested javascript libraries out there.
A: One thing nice about JavaScript is that it is quite different from mainstream languages such as C#, VB.NET or Java. Learning it, especially if you have occasions to use it, will give you another insight on programming, and that's always good. I think it's worth learning it.
A: If you are doing web development then you are going to get exposed to Javascript or ECMAScript at some point in your career, for any one of a number of reasons. At a minimum you should know enough Javascript to be able to validate user input; however, the web is moving in the direction of using more and more Ajax, so you should also know enough Javascript to properly leverage one of the major libraries out there, such as jQuery.
As some of the other users have noted, you can learn most of the Javascript you need on a day-to-day basis in a single day or a couple of afternoons. If you want to get more advanced with Javascript then you are going to have to invest much more time in learning the language, but odds are that unless you seek out this type of work you are not going to encounter something for which a preexisting library doesn't already exist.
A: If all you want is to do some simple UI-effects and the like, I suggest you just pick a library and go for it!
Using libraries eliminates all the flawed implementations of JavaScript and provides you with an API which is the same across all browsers. And if you're working together with others it is also a great way of implementing code-standards and best practices.
A: Learning a second programming language is always good.
By the sound of it, JavaScript is a language that you use, so it will be of practical use too. As a web dev, it has been recommended to me in a review that I learn at least basic JavaScript.
A library such as jQuery is essential for web development these days, so you could learn that too.
A: I don't think a lot of deliberate learning makes sense (but of course you need some basic knowledge), but I also think after some years of web development you'll become pretty proficient in the language anyway :)
A: If you are a webdev then yes, you should be proficient with Javascript. Javascript is a major part of making web apps as interactive as desktop apps.
With that being said, learn to use one of the cross-browser compatible libraries like JQuery, Prototype, etc. We do not need to have any more single browser crud created using Javascript, just because any real man/woman rolls their own.
A few things to learn in Javascript (a tiny example follows the list):
1. Basic syntax
2. The various flavours of function declaration.
3. Passing functions around and how to use passed in functions.
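For points 2 and 3, a small sketch of function declaration flavours and passing functions around:
// A function declaration and a function expression...
function add(a, b) { return a + b; }
var multiply = function (a, b) { return a * b; };

// ...and a function that takes another function as an argument.
function apply(op, a, b) { return op(a, b); }

apply(add, 2, 3);      // 5
apply(multiply, 2, 3); // 6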
A: I recommend Jeremy Keith's books: DOM Scripting and Bulletproof Ajax. After you become more fluent in JS I would recommend a JS library(I use jQuery, but that is not important).
JS is important to learn. You cannot use a framework without the proper understanding of how it works. That is doing things backwards.
A: I think you should have a good knowledge base of the language specification and the DOM (Document Object Model). That means you should know how to find/create "page objects" and edit properties. Also you should have an idea of "object oriented" javascript techniques, which are the starting point of a lot of frameworks. You don't need to learn a specific framework if you don't use it; simply keep the generic base concepts in mind!
A: I'll go with the opposite answer most are putting out there. Learning javascript as a developer these days is almost pointless. The language is similar enough to java/C# that it's syntax and semantics shouldn't be lost on you.
What you should learn is jQuery.
As you use jQuery you'll pick up the most common things you'll ever need from javascript anyway.
A: If you're involved with the Web in anyway then the answer is "Yes, always". Maybe an embedded or system's programmer could get by without JavaScript, but not a webdev.
Most of the libraries are designed to alleviate some of the pain of interacting with a multitude of browsers. They will not abstract away core JavaScript functionality.
A: Yes, you should learn JavaScript. Sooner or later you will need to use it!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: count (non-blank) lines-of-code in bash In Bash, how do I count the number of non-blank lines of code in a project?
A: cat 'filename' | grep '[^ ]' | wc -l
should do the trick just fine
A: #!/bin/bash
find . -path './pma' -prune -o -path './blog' -prune -o -path './punbb' -prune -o -path './js/3rdparty' -prune -o -print | egrep '\.php|\.as|\.sql|\.css|\.js' | grep -v '\.svn' | xargs cat | sed '/^\s*$/d' | wc -l
The above will give you the total count of lines of code (blank lines removed) for a project (current folder and all subfolders recursively).
In the above "./blog" "./punbb" "./js/3rdparty" and "./pma" are folders I blacklist as I didn't write the code in them. Also .php, .as, .sql, .css, .js are the extensions of the files being looked at. Any files with a different extension are ignored.
A: grep -cvE '(^\s*[/*])|(^\s*$)' foo
-c = count
-v = exclude
-E = extended regex
'(comment lines) OR (empty lines)'
where
^ = beginning of the line
\s = whitespace
* = any number of previous characters or none
[/*] = either / or *
| = OR
$ = end of the line
I post this because other options gave wrong answers for me. This worked with my java source, where comment lines start with / or * (I use * on every line in a multi-line comment).
A: awk '!/^[[:space:]]*$/ {++x} END {print x}' "$testfile"
A: There are many ways to do this, using common shell utilities.
My solution is:
grep -cve '^\s*$' <file>
This searches for lines in <file> that do not match (-v) the pattern (-e) '^\s*$', which is the beginning of a line, followed by 0 or more whitespace characters, followed by the end of a line (i.e. no content other than whitespace), and displays a count of matching lines (-c) instead of the matching lines themselves.
An advantage of this method over methods that involve piping into wc, is that you can specify multiple files and get a separate count for each file:
$ grep -cve '^\s*$' *.hh
config.hh:36
exceptions.hh:48
layer.hh:52
main.hh:39
A: If you want to use something other than a shell script, try CLOC:
cloc counts blank lines, comment
lines, and physical lines of source
code in many programming languages. It
is written entirely in Perl with no
dependencies outside the standard
distribution of Perl v5.6 and higher
(code from some external modules is
embedded within cloc) and so is quite
portable.
A: Here's a Bash script that counts the lines of code in a project. It traverses a source tree recursively, and it excludes blank lines and single line comments that use "//".
# $excluded is a regex for paths to exclude from line counting
excluded="spec\|node_modules\|README\|lib\|docs\|csv\|XLS\|json\|png"
countLines(){
# $total is the total lines of code counted
total=0
# -mindepth excludes the current directory (".")
for file in `find . -mindepth 1 -name "*.*" |grep -v "$excluded"`; do
# First sed: only count lines of code that are not commented with //
# Second sed: don't count blank lines
# $numLines is the lines of code
numLines=`cat $file | sed '/\/\//d' | sed '/^\s*$/d' | wc -l`
# To exclude only blank lines and count comment lines, uncomment this:
#numLines=`cat $file | sed '/^\s*$/d' | wc -l`
total=$(($total + $numLines))
echo " " $numLines $file
done
echo " " $total in total
}
echo Source code files:
countLines
echo Unit tests:
cd spec
countLines
Here's what the output looks like for my project:
Source code files:
2 ./buildDocs.sh
24 ./countLines.sh
15 ./css/dashboard.css
53 ./data/un_population/provenance/preprocess.js
19 ./index.html
5 ./server/server.js
2 ./server/startServer.sh
24 ./SpecRunner.html
34 ./src/computeLayout.js
60 ./src/configDiff.js
18 ./src/dashboardMirror.js
37 ./src/dashboardScaffold.js
14 ./src/data.js
68 ./src/dummyVis.js
27 ./src/layout.js
28 ./src/links.js
5 ./src/main.js
52 ./src/processActions.js
86 ./src/timeline.js
73 ./src/udc.js
18 ./src/wire.js
664 in total
Unit tests:
230 ./ComputeLayoutSpec.js
134 ./ConfigDiffSpec.js
134 ./ProcessActionsSpec.js
84 ./UDCSpec.js
149 ./WireSpec.js
731 in total
Enjoy! --Curran
A: cat foo.c | sed '/^\s*$/d' | wc -l
And if you consider comments blank lines:
cat foo.pl | sed '/^\s*#/d;/^\s*$/d' | wc -l
Although, that's language dependent.
A: This command counts the number of non-blank lines: cat fileName | grep -v ^$ | wc -l. The grep -v ^$ part filters out the blank lines.
A: The neatest command is
grep -vc ^$ fileName
with -c option, you don't even need wc -l
A: 'wc' counts lines, words, chars, so to count all lines (including blank ones) use:
wc *.py
To filter out the blank lines, you can use grep:
grep -v '^\s*$' *.py | wc
'-v' tells grep to output all lines except those that match
'^' is the start of a line
'\s*' is zero or more whitespace characters
'$' is the end of a line
*.py is my example for all the files you wish to count (all python files in current dir)
pipe output to wc. Off you go.
I'm answering my own (genuine) question. Couldn't find a Stack Overflow entry that covered this.
A: cat file.txt | awk 'NF' | wc -l
A: It's kinda going to depend on the number of files you have in the project. In theory you could use
grep -c '.' <list of files>
Where you can fill the list of files by using the find utility.
grep -c '.' `find -type f`
Would give you a line count per file.
A: Script to recursively count all non-blank lines with a certain file extension in the current directory:
#!/usr/bin/env bash
(
echo 0;
for ext in "$@"; do
for i in $(find . -name "*$ext"); do
sed '/^\s*$/d' $i | wc -l ## skip blank lines
#cat $i | wc -l; ## count all lines
echo +;
done
done
echo p q;
) | dc;
Sample usage:
./countlines.sh .py .java .html
A: If you want the sum of all non-blank lines for all files of a given file extension throughout a project:
while read line
do grep -cve '^\s*$' "$line"
done < <(find $1 -name "*.$2" -print) | awk '{s+=$1} END {print s}'
First arg is the project's base directory, second is the file extension. Sample usage:
./scriptname ~/Dropbox/project/src java
It's little more than a collection of previous solutions.
A: rgrep . | wc -l
gives the count of non-blank lines in the current working directory.
A: grep -v '^\W*$' `find -type f` | grep -c '.' > /path/to/lineCountFile.txt
gives an aggregate count for all files in the current directory and its subdirectories.
HTH!
A: This gives the number of lines without counting the blank lines:
grep -v ^$ filename | wc -l | sed -e 's/ //g'
A: Try this one:
> grep -cve ^$ -cve '^//' *.java
it's easy to memorize and it also excludes blank lines and commented lines.
A: There's already a program for this on linux called 'wc'.
Just
wc -l *.c
and it gives you the total lines and the lines for each file.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "168"
}
|
Q: Send data between two PHP scripts I want to have a PHP script send a XML formatted string to another PHP script that resides on a different server in a different part of town.
Is there any nice, clean way of doing this?
(PHP5 and all the latest software available)
A: Check out cURL for posting data between pages.
A: If it were me, I would just POST the xml data to the other script. You could use a socket from PHP, or use CURL. I think that's the cleanest solution, although SOAP is also viable if you don't mind the overhead of the SOAP request, as well as using a library.
A: I strongly suggest rolling your own RESTful API and avoiding the complexity of SOAP altogether. All you need is the curl extension to handle the HTTP request/response, and simple_xml to build/process the XML. If your data is in a reasonable format, it should be easy for you to push it into an XML string and submit it as a POST to the other server. That server will respond to the request by reading the XML string from the POST var back into an object, and voila! It shouldn't take you all day to whip this out.
A: XML-RPC or SOAP or just a RESTful API
A: You can use cURL (complex API), the http extension (cleaner), or if you need to do more complex stuff you can even use the Scriptable Browser from simpletest.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Getting a vector into a function that expects a vector Consider these classes.
class Base
{
...
};
class Derived : public Base
{
...
};
this function
void BaseFoo( std::vector<Base*>vec )
{
...
}
And finally my vector
std::vector<Derived*>derived;
I want to pass derived to function BaseFoo, but the compiler doesn't let me. How do I solve this, without copying the whole vector to a std::vector<Base*>?
A: one option is to use a template
template<typename T>
void BaseFoo( const std::vector<T*>& vec)
{
...
}
The drawback is that the implementation has to be in the header and you will get a little code bloat. You will wind up with different functions being instantiated for each type, but the code stays the same. Depending on the use case it's a quick and dirty solution.
Edit: I should note the reason we need a template here is that we are trying to write the same code for unrelated types, as noted by several other posters. Templates allow you to solve these exact problems. I also updated it to use a const reference. You should also pass "heavy" objects like a vector by const reference when you don't need a copy, which is basically always.
A: vector<Base*> and vector<Derived*> are unrelated types, so you can't do this. This is explained in the C++ FAQ here.
You need to change your variable from a vector<Derived*> to a vector<Base*> and insert Derived objects into it.
Also, to avoid copying the vector unnecessarily, you should pass it by const-reference, not by value:
void BaseFoo( const std::vector<Base*>& vec )
{
...
}
Finally, to avoid memory leaks, and make your code exception-safe, consider using a container designed to handle heap-allocated objects, e.g:
#include <boost/ptr_container/ptr_vector.hpp>
boost::ptr_vector<Base> vec;
Alternatively, change the vector to hold a smart pointer instead of using raw pointers:
#include <memory>
std::vector< std::shared_ptr<Base> > vec;
or
#include <boost/shared_ptr.hpp>
std::vector< boost::shared_ptr<Base> > vec;
In each case, you would need to modify your BaseFoo function accordingly.
A: Instead of passing the container object (vector<>), pass in begin and end iterators like the rest of the STL algorithms. The function that receives them will be templated, and it won't matter if you pass in Derived* or Base*.
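A minimal sketch of that idea (a hypothetical function body; it works for any element type convertible to Base*):
template <typename Iter>
void BaseFoo(Iter first, Iter last)
{
    for (; first != last; ++first)
    {
        Base* p = *first; // accepts Base* and Derived* elements alike
        // ... work with p ...
    }
}
// Usage:
// std::vector<Derived*> derived;
// BaseFoo(derived.begin(), derived.end());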
A: Generally you would start with a container of base pointers, not the other way.
A: If you dealing with a third-party library, and this is your only hope, then you can do this:
BaseFoo (*reinterpret_cast<std::vector<Base *> *>(&derived));
Otherwise fix your code with one of the other suggestions.
A: Taking Matt Price's answer from above, given that you know in advance what types you want to use with your function, you can declare the function template in the header file, and then add explicit instantiations for those types:
// BaseFoo.h
template<typename T>
void BaseFoo( const std::vector<T*>& vec);
// BaseFoo.cpp
template<typename T>
void BaseFoo( const std::vector<T*>& vec)
{
...
}
// Explicit instantiation means no need for definition in the header file.
template void BaseFoo<Base> ( const std::vector<Base*>& vec );
template void BaseFoo<Derived> ( const std::vector<Derived*>& vec );
A: This problem occurs in programming languages that have mutable containers. You cannot pass around a mutable bag of apples as a bag of fruit because you cannot be sure that someone else does not put a lemon into that bag of fruit, after which it no longer qualifies as a bag of apples. If the bag of apples were not mutable, passing it around as a bag of fruit would be fine. Search for covariance/contravariance.
A: If std::vector supported what you're asking for, then it would be possible to defeat the C++ type system without using any casts (edit: ChrisN's link to the C++ FAQ Lite talks about the same issue):
class Base {};
class Derived1 : public Base {};
class Derived2 : public Base {};
void pushStuff(std::vector<Base*>& vec) {
vec.push_back(new Derived2);
vec.push_back(new Base);
}
...
std::vector<Derived1*> vec;
pushStuff(vec); // Not legal
// Now vec contains a Derived2 and a Base!
Since your BaseFoo() function takes the vector by value, it cannot modify the original vector that you passed in, so what I wrote would not be possible. But if it takes a non-const reference and you use reinterpret_cast<std::vector<Base*>&>() to pass your std::vector<Derived*>, you might not get the result that you want, and your program might crash.
Java arrays support covariant subtyping, and this requires Java to do a runtime type check every time you store a value in an array. This too is undesirable.
A: They are unrelated types -- you can't.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: Is a Python dictionary an example of a hash table? One of the basic data structures in Python is the dictionary, which allows one to record "keys" for looking up "values" of any type. Is this implemented internally as a hash table? If not, what is it?
A: To expand upon nosklo's explanation:
a = {}
b = ['some', 'list']
a[b] = 'some' # this won't work
a[tuple(b)] = 'some' # this will, same as a['some', 'list']
A: There must be more to a Python dictionary than a table lookup on hash(). By brute experimentation I found this hash collision:
>>> hash(1.1)
2040142438
>>> hash(4504.1)
2040142438
Yet it doesn't break the dictionary:
>>> d = { 1.1: 'a', 4504.1: 'b' }
>>> d[1.1]
'a'
>>> d[4504.1]
'b'
Sanity check:
>>> for k,v in d.items(): print(hash(k))
2040142438
2040142438
Possibly there's another lookup level beyond hash() that avoids collisions between dictionary keys. Or maybe dict() uses a different hash.
(By the way, this is in Python 2.7.10. Same story in Python 3.4.3 and 3.5.0 with a collision at hash(1.1) == hash(214748749.8).)
(I haven't found any collisions in Python 3.9.6. Since the hashes are bigger -- hash(1.1) == 230584300921369601 -- I estimate it would take my desktop a thousand years to find one. So I'll get back to you on this.)
A: Yes, it is a hash mapping or hash table. You can read a description of python's dict implementation, as written by Tim Peters, here.
That's why you can't use something 'not hashable' as a dict key, like a list:
>>> a = {}
>>> b = ['some', 'list']
>>> hash(b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: list objects are unhashable
>>> a[b] = 'some'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: list objects are unhashable
You can read more about hash tables or check how it has been implemented in python and why it is implemented that way.
A: Yes. Internally it is implemented as hashing with open addressing, based on a primitive polynomial over Z/2 (source).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "256"
}
|
Q: Scatter/gather async socket I/O in .NET I'm trying to use the Stream.BeginWrite Async I/O API in .NET for a high-throughput situation with many short messages. As such, a scatter/gather API will reduce the number of context switches (and CPU usage) tremendously. Does this API use the LPBUFFERS Win32 API at all? Is there an alternative API for Scatter/Gather I/O?
A: Looking at the .net sources, the accepted answer seems to be wrong.
SocketAsyncEventArgs has a BufferList attribute. When that is used, instead of the Buffer attribute that can only hold a single contiguous block of memory, operations can make use of scatter/gather DMA, as Socket.SendAsync(SocketAsyncEventArgs) uses WSASend internally, that
allows multiple send buffers to be specified making it applicable to the scatter/gather type of I/O
and Socket.ReceiveAsync(SocketAsyncEventArgs) uses WSARecv, that
allows multiple receive buffers to be specified making it applicable to the scatter/gather type of I/O
I don't have the .net 3.5 sources handy, but BufferList exists since .net 3.5, so scatter/gather might have been supported since .net 3.5. The minimum OS requirements for WSASend and WSARecv are documented as Windows Vista / Server 2003.
N.B. I don't know what stream you are using, but NetworkStream.BeginWrite sends a single buffer to the WSASend, so you cannot use that for scatter/gathering.
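For illustration, here is a bare Win32 sketch of what a gathered send looks like at the WSASend level (a hypothetical helper, not the .NET API itself; error handling omitted):
#include <winsock2.h>
// link with ws2_32.lib

// Send a header and a payload in one call (gather write).
void GatherSend(SOCKET s, char* hdr, ULONG hdrLen, char* body, ULONG bodyLen)
{
    WSABUF bufs[2];
    bufs[0].buf = hdr;  bufs[0].len = hdrLen;
    bufs[1].buf = body; bufs[1].len = bodyLen;

    DWORD sent = 0;
    // Blocking form; pass a WSAOVERLAPPED pointer for asynchronous use.
    WSASend(s, bufs, 2, &sent, 0, NULL, NULL);
}
The BufferList property on SocketAsyncEventArgs is what maps managed buffers onto such a WSABUF array for you.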
A: I would be surprised if you could get to the scatter/gather APIs from the BCL (it's for the l33t w1n32 haxx0rz, you know?), but there's always P/Invoke (which is surprisingly easy to use, I've found).
A: If you want to dig into the guts of the framework, there are a few ways to do it:
1) Reflector
2) MS recently opened up the source for debugging purposes, you can step into it with VS2008 if you enable the option under Debugging/Options/General
3) Koders.com seem to be hosting the framework source too:
http://www.koders.com/csharp/fidCE09E83BE706D0BD370658C3785E82D3A13FC2CE.aspx?s=flush()#L109
A: There is no way to do socket scatter/gather I/O in .NET. According to an MSFT blog post, there may be a similar API in .NET 4.5 (whatever that is...)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do I bind the result of DataTable.Select() to a ListBox control? I have the following code:
ListBox.DataSource = DataSet.Tables("table_name").Select("some_criteria = match")
ListBox.DisplayMember = "name"
The DataTable.Select() method returns an array of System.Data.DataRow objects.
No matter what I specify in the ListBox.DisplayMember property, all I see is the ListBox with the correct number of items all showing as System.Data.DataRow instead of the value I want which is in the "name" column!
Is it possible to bind to the resulting array from DataTable.Select(), instead of looping through it and adding each one to the ListBox?
(I've no problem with looping, but it doesn't seem an elegant ending!)
A: Use a DataView instead.
ListBox.DataSource = new DataView(DataSet.Tables("table_name"), "some_criteria = match", "name", DataViewRowState.CurrentRows);
ListBox.DisplayMember = "name"
A: Josh has it right with the DataView. If you need a very large hammer, you can take the array of rows from any DataTable.Select("...") and do a merge into a different DataSet.
DataSet copy = new DataSet();
copy.Merge(myDataTable.Select("Foo='Bar'"));
// copy.Tables[0] has a clone
That approach for what you're trying to do is most probably overkill but there are instances when you may need to get a datatable out of an array of rows where it's helpful.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: How to prevent creating intermediate objects in cascading operators? I use a custom Matrix class in my application, and I frequently add multiple matrices:
Matrix result = a + b + c + d; // a, b, c and d are also Matrices
However, this creates an intermediate matrix for each addition operation. Since this is simple addition, it is possible to avoid the intermediate objects and create the result by adding the elements of all 4 matrices at once. How can I accomplish this?
NOTE: I know I can define multiple functions like Add3Matrices(a, b, c), Add4Matrices(a, b, c, d), etc. but I want to keep the elegancy of result = a + b + c + d.
A: You could limit yourself to a single small intermediate by using lazy evaluation. Something like
public class LazyMatrix
{
public static implicit operator Matrix(LazyMatrix l)
{
Matrix m = new Matrix();
foreach (Matrix x in l.Pending)
{
for (int i = 0; i < 2; ++i)
for (int j = 0; j < 2; ++j)
m.Contents[i, j] += x.Contents[i, j];
}
return m;
}
public List<Matrix> Pending = new List<Matrix>();
}
public class Matrix
{
public int[,] Contents = { { 0, 0 }, { 0, 0 } };
public static LazyMatrix operator+(Matrix a, Matrix b)
{
LazyMatrix l = new LazyMatrix();
l.Pending.Add(a);
l.Pending.Add(b);
return l;
}
public static LazyMatrix operator+(Matrix a, LazyMatrix b)
{
b.Pending.Add(a);
return b;
}
}
class Program
{
static void Main(string[] args)
{
Matrix a = new Matrix();
Matrix b = new Matrix();
Matrix c = new Matrix();
Matrix d = new Matrix();
a.Contents[0, 0] = 1;
b.Contents[1, 0] = 4;
c.Contents[0, 1] = 9;
d.Contents[1, 1] = 16;
Matrix m = a + b + c + d;
for (int i = 0; i < 2; ++i)
{
for (int j = 0; j < 2; ++j)
{
System.Console.Write(m.Contents[i, j]);
System.Console.Write(" ");
}
System.Console.WriteLine();
}
System.Console.ReadLine();
}
}
A: In C++ it is possible to use template metaprogramming, specifically expression templates, to do exactly this. However, the template programming is non-trivial. I don't know if a similar technique is available in C#, quite possibly not.
This technique, in C++, does exactly what you want. The disadvantage is that if something is not quite right then the compiler error messages tend to run to several pages and are almost impossible to decipher.
Without such techniques I suspect you are limited to functions such as Add3Matrices.
But for C# this link might be exactly what you need: Efficient Matrix Programming in C# although it seems to work slightly differently to C++ template expressions.
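For a flavor of the C++ technique, here is a minimal expression-template sketch (a hypothetical fixed-size 2x2 Matrix; real libraries such as Blitz++ are far more elaborate):
#include <cstddef>
#include <iostream>

// CRTP base so operator+ only matches our expression types.
template <typename E>
struct Expr {
    const E& self() const { return static_cast<const E&>(*this); }
};

struct Matrix : Expr<Matrix> {
    double v[2][2] = {};

    double operator()(std::size_t i, std::size_t j) const { return v[i][j]; }

    // Assigning from any expression evaluates it element by element,
    // so a + b + c + d is summed in a single pass, with no
    // intermediate Matrix objects at all.
    template <typename E>
    Matrix& operator=(const Expr<E>& e) {
        for (std::size_t i = 0; i < 2; ++i)
            for (std::size_t j = 0; j < 2; ++j)
                v[i][j] = e.self()(i, j);
        return *this;
    }
};

// A pending addition; stores references and does no work until indexed.
template <typename L, typename R>
struct Sum : Expr<Sum<L, R>> {
    const L& l;
    const R& r;
    Sum(const L& l, const R& r) : l(l), r(r) {}
    double operator()(std::size_t i, std::size_t j) const { return l(i, j) + r(i, j); }
};

template <typename L, typename R>
Sum<L, R> operator+(const Expr<L>& l, const Expr<R>& r) {
    return Sum<L, R>(l.self(), r.self());
}

int main() {
    Matrix a, b, c, d, result;
    a.v[0][0] = 1; b.v[1][0] = 4; c.v[0][1] = 9; d.v[1][1] = 16;
    result = a + b + c + d;            // one loop, zero Matrix temporaries
    std::cout << result(1, 1) << "\n"; // 16
}
The point is that operator+ only builds a lightweight Sum node; the actual element-wise work happens once, inside Matrix::operator=.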
A: Something that would at least avoid the pain of
Matrix Add3Matrices(a,b,c) //and so on
would be
Matrix AddMatrices(Matrix[] matrices)
A: You can't avoid creating intermediate objects.
However, you can use expression templates as described here to minimise them and do fancy lazy evaluation of the templates.
At the simplest level, the expression template could be an object that stores references to several matrices and calls an appropriate function like Add3Matrices() upon assignment. At the most advanced level, the expression templates will do things like calculate the minimum amount of information in a lazy fashion upon request.
A: This is not the cleanest solution, but if you know the evaluation order, you could do something like this:
result = MatrixAdditionCollector() << a + b + c + d
(or the same thing with different names). The MatrixCollector then implements + as +=, that is, starts with a 0-matrix of undefined size, takes a size once the first + is evaluated and adds everything together (or, copies the first matrix). This reduces the amount of intermediate objects to 1 (or even 0, if you implement assignment in a good way, because the MatrixCollector might be/contain the result immediately.)
I am not entirely sure if this is ugly as hell or one of the nicer hacks one might do. A certain advantage is that it is kind of obvious what's happening.
A: I thought that you could just make the desired add-in-place behavior explicit:
Matrix result = a;
result += b;
result += c;
result += d;
But as pointed out by Doug in the Comments on this post, this code is treated by the compiler as if I had written:
Matrix result = a;
result = result + b;
result = result + c;
result = result + d;
so temporaries are still created.
I'd just delete this answer, but it seems others might have the same misconception, so consider this a counter example.
A: Might I suggest a MatrixAdder that behaves much like a StringBuilder. You add matrixes to the MatrixAdder and then call a ToMatrix() method that would do the additions for you in a lazy implementation. This would get you the result you want, could be expandable to any sort of LazyEvaluation, but also wouldn't introduce any clever implementations that could confuse other maintainers of the code.
A: Bjarne Stroustrup has a short paper called Abstraction, libraries, and efficiency in C++ where he mentions techniques used to achieve what you're looking for. Specifically, he mentions the library Blitz++, a library for scientific calculations that also has efficient operations for matrices, along with some other interesting libraries. Also, I recommend reading a conversation with Bjarne Stroustrup on artima.com on that subject.
A: It is not possible using operators.
A: My first solution would be something along this lines (to add in the Matrix class if possible) :
static Matrix AddMatrices(Matrix[] lMatrices) // or List<Matrix> lMatrices
{
// Check consistency of matrices
Matrix m = new Matrix(n, p);
for (int i = 0; i < n; i++)
for (int j = 0; j < p; j++)
foreach (Maxtrix mat in lMatrices)
m[i, j] += mat[i, j];
return m;
}
I'd have it in the Matrix class because you can rely on the private methods and properties that could be useful for your function in case the implementation of the matrix changes (a linked list of non-empty nodes instead of a big double array, for example).
Of course, you would lose the elegance of result = a + b + c + d. But you would have something along the lines of result = Matrix.AddMatrices(new Matrix[] { a, b, c, d });.
A: There are several ways to implement lazy evaluation to achieve that. But it is important to remember that your compiler will not always be able to get the best code out of all of them.
I have already made implementations that worked great in GCC and even exceeded the performance of the traditional nest of unreadable for loops, because they led the compiler to observe that there were no aliases between the data segments (something hard to establish with arrays coming out of nowhere). But some of those were a complete failure in MSVC, and vice versa with other implementations. Unfortunately those are too long to post here (several thousand lines of code don't fit here).
A very complex library with great embedded knowledge in the area is the Blitz++ library for scientific computation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What is the best way to pack JavaScript code without getting performance flaws? I am searching for a way to compress JavaScript code for the iPhone. Is there a way to avoid using a lot of CPU time on the small and rather slow device?
A: Use the YUI Compressor
A: Use JSMin and avoid Packer, which consumes far more CPU and is slower to "deflate".
A: I love ShrinkSafe. It interprets your code in Rhino, then it returns compressed code. Because it's operating on real interpreted code (instead of complex string evaluations) it will never munge code or fail to find differences between public and private variables.
It's a tool of excellent quality.
A: We've used js_compactor and JavaScriptLint to "compile" and compress our JavaScript in our automated build process. A further build step would take the compressed JavaScript and combine related files into a single package. The performance boost was significant, but be aware that you are trading away the ability to debug.
Reducing the number of files transmitted to the client gives you a big performance boost when there are more than a few files. Typically, browsers will only open 2 connections to a single server at a time, so even if you are transmitting compressed and minimized files the browser spends a significant amount of overhead checking its cache. YSlow helped us identify why pages were taking a long time to load and helped us focus our optimization efforts. We instrumented our environment to either use the raw files or the minimized and compressed versions.
A: I believe Safari on the iPhone supports gzip output so you could use something like mod_deflate. I've had the best results using this method. Quite a bit of the JavaScript compression stuff out there is absolute garbage and takes longer to decompress than it does to download the larger file. JSMin looks pretty good, though.
A: You can try different tools at The JavaScript CompressorRater. All tools except packer have no impact on how fast the JavaScript executes as far as I know - they only remove whitespace, rename variables and such.
I myself consider YUI Compressor to be the best one.
It's always useful to validate the code in JSLint first to be sure that the compressor understands it correctly.
A: Making sure your webserver properly serves stuff gzipped/deflated when the client supports it is usually more effective than minifying the program code itself. Of course, using both tends to give even smaller sizes.
A: I just went through this little dance in the last few days. We tried using Packer, but found that our packed JavaScript was taking over 2 seconds to execute (not to mention blocking other downloads). Based on this article we've switched to YUI Compressor. Not only are our gzipped file sizes smaller, execution times are under 300 ms.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How expensive are JS function calls (compared to allocating memory for a variable)? Given some JS code like that one here:
for (var i = 0; i < document.getElementsByName('scale_select').length; i++) {
document.getElementsByName('scale_select')[i].onclick = vSetScale;
}
Would the code be faster if we put the result of getElementsByName into a variable before the loop and then use the variable after that?
I am not sure how large the effect is in real life, with the result from getElementsByName typically having < 10 items. I'd like to understand the underlying mechanics anyway.
Also, if there's anything else noteworthy about the two options, please tell me.
A: Caching the property lookup might help some, but caching the length of the array before starting the loop has proven to be faster.
So declaring a variable in the loop that holds the value of the scale_select.length would speed up the entire loop some.
var scale_select = document.getElementsByName('scale_select');
for (var i = 0, al=scale_select.length; i < al; i++)
scale_select[i].onclick = vSetScale;
A: A smart implementation of DOM would do its own caching, invalidating the cache when something changes. But not all DOMs today can be counted on to be this smart (cough IE cough) so it's best if you do this yourself.
A: Definitely. The memory required to store that would only be a pointer to a DOM object and that's significantly less painful than doing a DOM search each time you need to use something!
Idealish code:
var scale_select = document.getElementsByName('scale_select');
for (var i = 0; i < scale_select.length; i++)
scale_select[i].onclick = vSetScale;
A:
In principle, would the code be faster if we put the result of getElementsByName into a variable before the loop and then use the variable after that?
yes.
A: Use variables. They're not very expensive in JavaScript and function calls are definitely slower. If you loop at least 5 times over document.getElementById() use a variable. The idea here is not only the function call is slow but this specific function is very slow as it tries to locate the element with the given id in the DOM.
A: There's no point storing the scaleSelect.length in a separate variable; it's actually already in one - scaleSelect.length is just an attribute of the scaleSelect array, and as such it's as quick to access as any other static variable.
A: I think so. Every time it loops, the engine needs to re-evaluate the document.getElementsByName statement.
On the other hand, if the value is saved in a variable, then it already has the value.
A: @ Oli
Caching the length property of the elements fetched in a variable is also a good idea:
var scaleSelect = document.getElementsByName('scale_select');
var scaleSelectLength = scaleSelect.length;
for (var i = 0; i < scaleSelectLength; i += 1)
{
// scaleSelect[i]
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: How to determine the value of socket listen() backlog parameter? How should I determine what to use for a listening socket's backlog parameter? Is it a problem to simply specify a very large number?
A: I second using SOMAXCONN, unless you have a specific reason to use a short queue.
Keep in mind that if there is no room in the queue for a new connection, no RST will be sent, allowing the client to automatically continue trying to connect by retransmitting SYN.
Also, the backlog argument can have different meanings in different socket implementations.
*
*In most it means the size of the half-open connection queue, in some it means the size of the completed connection queue.
*In many implementations, the backlog argument will be multiplied to yield a different queue length.
*If a value is specified that is too large, all implementations will silently truncate the value to the maximum queue length anyway.
A: There's a very long answer to this in the Winsock Programmer's FAQ. It details the standard setting, and the dynamic backlog feature added in a hotfix to NT 4.0.
A: From the docs:
A value for the backlog of SOMAXCONN is a special constant that instructs the underlying service provider responsible for socket s to set the length of the queue of pending connections to a maximum reasonable value.
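For illustration, a minimal POSIX-style sketch of a listener using SOMAXCONN (a hypothetical helper; error handling omitted):
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

// Create a TCP listening socket on the given port.
int make_listener(unsigned short port)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    // Let the implementation pick its maximum reasonable queue length;
    // an oversized value would be silently clamped anyway.
    listen(s, SOMAXCONN);
    return s;
}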
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
}
|
Q: Is there a way to install gcc in OSX without installing Xcode? I've googled the hell out of it, and it seems like there is no way to install gcc on OS X without installing Xcode (which takes at leats 1.5GB of space). All I need is gcc and none of the other junk that comes with Xcode. And at this point, I'll take any other kind of C compiler.
I know I could simply install Xcode, but that is beside the point since I neither have my original installation disc nor a quick internet connection.
So... does anyone have any suggestions?
EDIT: Sorry if I was unclear, but I need the headers as well. I'm currently installing gcc4 via fink and it's downloading the shared libraries as well. I'll update on the progress.
EDIT 2: Ok, so I successfully installed gcc using fink. BUT, it's pretty much useless: "error: C compiler cannot create executables". After googling around, I found that not having Apple's Developer Tools installed is the cause of the error. Probably because I need all the libraries, headers, etc that are only available through Xcode.
A: I've been doing this for a long time, and I've done things like this, and I've concluded it's simply never worth doing. :-(
The reason is that no one expects you to do such things, so there are assumptions all over the system that "everything" is there. You might not run into this today - or worse, you might not even realize later that this is the cause of your issues.
Instead of wasting your smart time on things like this, which don't actually produce any working code you can use, follow the approved method: run the download overnight, and spend your time instead on planning and writing the top-level code (you shouldn't need a compiler for that anyway!)
A: Check out the Command Line Tools for Xcode from Apple. It's official support from Apple for installing just the command line tools.
A: Try the osx-gcc-installer on github.
A: I'm fairly certain that this is not possible. However, I'm also not sure if you need the whole developer suite to get the developer tools installed. Quite a few tools get installed along with XCode that might be optional. However, I think you're out of luck for not needing to bite the bullet and use wget or DownThemAll or some other download manager that will let you slowly download the developer tools in chunks.
Whenever I install OS X I install the developer tools as a rule, just because it opens up the world of available software tremendously. Perhaps you should consider doing this in the future as well.
A: The first thing you want to try is called Pacifist - what Pacifist lets you do is to open a large package (such as XCode) and to access parts of it directly. I'm pretty sure you'll be able to find a smaller package inside the XCode package that just has gcc.
HOWEVER it's not clear to me that this is the best route. If you are planning to do Cocoa or Carbon developing I strongly suggest installing the entire package because you will need all the documentation and headers. If you're only planning on doing command-line stuff, you still may find you need to poke around inside XCode to identify all the packages you will need - things such as libraries, headers, man pages and so on.
All in all you're probably still better off installing the whole thing - if HD space is really tight (because you're on a tiny old iMac for example) then look at tools like Monolingual - Monolingual removes all the international support from all the various OS X applications, which can easily reduce the size of an application by 50%.
A: There's fink and MacPorts, if you want an easy installer/updater.
A: Install the GCC package from the Packages directory in Xcode's disk image and you'll have just GCC. Note that of course you won't have autotools or other standard build tools, for which you will have to install more packages from that folder.
A: I found this while googling around; it appears to install gcc without Xcode.
A: Install the Command Line Tools separately.
Refer to:
*
*http://osxdaily.com/2014/02/12/install-command-line-tools-mac-os-x
*http://osxdaily.com/2012/07/06/install-gcc-without-xcode-in-mac-os-x/
A: Yes, I could do it with port, but you at least need to accept the code license.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
}
|
Q: QTVR-like Panorama in Flash/ActionScript? It has been a few years since I used Actionscript. Back in the day, I made a project that emulated a QTVR panorama (at the time I was using Flash, you could only embed very basic mov files) by simply moving a very long flattened pano image left or right behind a mask. The effect was okay, but not as nice as a real pano, since the perspective was so warped. So now that a couple iterations of Flash have been developed I am curious...
Is there a way now to get a bit closer to a real QTVR? ...or is it now possible to embed a real QTVR?
A: FlashPanoramas has worked great for me in the past. One of its newer features is the ability to directly load in QTVR files.
A: The best solution, one I am currently using on a day to day basis to present plots of land for sale on a website I developed is to use krpano. It is by far the best one out there -- flash-based, XML-powered, very customizable, supports plug-ins, hands down the best.
http://www.krpano.com/
It has a flash viewer and also all the tools you need to create the panoramas.
A: My advice would be: download PaperVision, cut your image into strips, then arrange these in a ring as 3D planes.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to pass an array parameter in TOAD Using toad and an oracle database, how can I call a sp and see the results by passing an array to one of the parameters of the sp?
A: In the Editor tab you can call it like this:
begin
myproc (my_array_type(1,4,7,9));
end;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How do you fix "Too many open files" problem in Hudson? We use Hudson as a continuous integration system to execute automated builds (nightly and based on CVS polling) of a lot of our projects.
Some projects poll CVS every 15 minutes, some others poll every 5 minutes and some poll every hour.
Every few weeks we'll get a build that fails with the following output:
FATAL: java.io.IOException: Too many open files
java.io.IOException: java.io.IOException: Too many open files
at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
The next build always worked (with 0 changes) so we always chalked it up to 2 build jobs being run at the same time and happening to have too many files open during the process.
This weekend we had a build fail Friday night (automatic nightly build) with the message and every other nightly build also failed. Somehow this triggered Hudson to continuously build every project which failed until the issue was resolved. This resulted in a build every 30 minutes or so of every project until sometime Saturday night when the issue magically disappeared.
A: This is Hudson issue 715 (http://issues.hudson-ci.org/browse/HUDSON-715). The current recommendation is to set the 'maximum number of simultaneous polling threads' to keep the polling activity down.
A: See https://wiki.jenkins-ci.org/display/JENKINS/I%27m+getting+too+many+open+files+error for what we need from you to fix this kind of problem.
A: Change system limits for per-process maximum open file descriptors? As in ulimit -n for the Java process?
A: I have experienced this problem with another Java application running on Debian; it went away when we downgraded to Java version 1.6.0.0. Java never closed unused connections, causing it to throw the exception.
A: One of the most common problems causing "Too many open files" is having the Active Directory plugin enabled and configured in Jenkins. There are known issues with this plugin which cause an enormous number of threads to show up, and the "Too many open files" error in the logs as well. After disabling it and switching to LDAP authentication I did not experience Jenkins hanging anymore.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: .NET Process.Start default directory? I'm firing off a Java application from inside of a C# .NET console application. It works fine for the case where the Java application doesn't care what the "default" directory is, but fails for a Java application that only searches the current directory for support files.
Is there a process parameter that can be set to specify the default directory that a process is started in?
A: The Process.Start method has an overload that takes an instance of ProcessStartInfo. This class has a property called "WorkingDirectory".
Set that property to the folder you want to use and that should make it start up in the correct folder.
A: Use the ProcessStartInfo.WorkingDirectory property to set it prior to starting the process. If the property is not set, the default working directory is %SYSTEMROOT%\system32.
You can determine the value of %SYSTEMROOT% by using:
string _systemRoot = Environment.GetEnvironmentVariable("SYSTEMROOT");
Here is some sample code that opens Notepad.exe with a working directory of %ProgramFiles%:
...
using System.Diagnostics;
...
ProcessStartInfo _processStartInfo = new ProcessStartInfo();
_processStartInfo.WorkingDirectory = @"%ProgramFiles%";
_processStartInfo.FileName = @"Notepad.exe";
_processStartInfo.Arguments = "test.txt";
_processStartInfo.CreateNoWindow = true;
Process myProcess = Process.Start(_processStartInfo);
You can also access the current working directory for your process directly through the Environment.CurrentDirectory property.
A: Use the ProcessStartInfo class and assign a value to the WorkingDirectory property.
A: Just a note after hitting my head trying to implement this.
Setting the WorkingDirectory value does not work if you have "UseShellExecute" set to false.
A: Yes!
ProcessStartInfo Has a property called WorkingDirectory, just use:
...
using System.Diagnostics;
...
var startInfo = new ProcessStartInfo();
startInfo.WorkingDirectory = // working directory
// set additional properties
Process proc = Process.Start(startInfo);
A: Use the ProcessStartInfo.WorkingDirectory property.
Docs here.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "131"
}
|
Q: Adding video to a site In your opinion, what are the best options for adding video to a website assuming it would be rendered as FLV. What are the key considerations?
Would you use a 3rd party service (youtube.com, vimeo.com, etc.) or host yourself? Why?
If you used a service, which one? If you hosted yourself is it as simple as using an existing embeddable flash FLV player to access FLV files via HTTP or is there something more you would do in terms of content management, etc.?
A: I guess the question boils down to whether you need to be in complete control of the video, and whether you have money to throw at the project. If you host on youtube etc you are subject to their terms of service and need to work within the constraints of their branding.
When I have needed complete control of Flash video clips for clients I have used the JW-FLV player. It will happily serve FLV files off an HTTP server. It is possible to embed the player in another Flash movie, but most often you will control the playlist from HTML links. Hosting video files can get very expensive, so expect to pay a hefty bandwidth bill.
I would use a 3rd party service if I was creating video for public consumption that had some sort of marketing aspect to it. Host it on YouTube and you can get very good exposure, and people have a chance of finding your video. These services also have global reach in their networks so you may get better performance worldwide.
Google recently released Video for Google Apps customers. This allows you to secure your Google video to users belonging to your organisation. This bridges the gap for some projects that would traditionally use self-hosting.
A: Whether you decide to host the video yourself depends greatly on your requirements, hosting environment and the technology you use. If it's a small personal site, then it's perfectly OK to host it on YouTube or another hosting service, but if you are making a corporate site, it looks much more professional if you host it yourself. Or if the video won't change very frequently, it's pretty easy to just host it yourself.
To host it yourself, it's simply a matter of putting the file in a web-accessible directory on the server and setting the URL in the player.
If you need to do content management, then keep in mind the possible upload limits you will have on the server, and the fact that HTTP is not the ideal protocol for uploading large files.
If you have to recode the video on the server, don't forget that the server will take a serious performance hit while the encoding is running.
To recode the video on the server I prefer to use FFMPEG or mencoder (both have windows and linux/unix versions).
A: If you are going to use a 3rd party site, use vimeo - it's a great user experience and great video quality.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Vertical Scrolling Marquee for foxpro Could anyone could point me to some code/give me ideas on how to create a smooth scrolling vertical marquee for VFP 8 or 9?
Any help is appreciated.
A: You can use Scrollable Container
A: Here's a quick program that will scroll messages. Put the following in a prg file and run it.
I'd make containerScrollArea a class that encapsulates the timer, labels, and scrolling code. Give it a GetNextMessage method that you can override to retrieve the messages.
* Put a container on the screen to hold our scroller
_screen.AddObject("containerScrollArea", "container")
WITH _Screen.containerScrollArea
* Size it
.Visible = .t.
.Width = 100
.Height = 100
* Add two labels, one to hold each scrolling message
.AddObject("labelScroll1", "Label")
.AddObject("labelScroll2", "Label")
* This timer will move the labels to scroll them
.AddObject("timerScroller", "ScrollTimer")
ENDWITH
WITH _Screen.containerScrollArea.labelScroll1
* The labels are positioned below the margin of the container, so they're not initially visible
.Top = 101
.Height = 100
.Visible = .t.
.WordWrap = .t.
.BackStyle= 0
.Caption = "This is the first scrolling text, which is scrolling."
ENDWITH
WITH _Screen.containerScrollArea.labelScroll2
* The labels are positioned below the margin of the container, so they're not initially visible
.Top = 200
.Height = 100
.Visible = .t.
.WordWrap = .t.
.BackStyle= 0
.Caption = "This is the second scrolling text, which is scrolling."
ENDWITH
* Start the timer, which scrolls the labels
_Screen.containerScrollArea.timerScroller.Interval = 100
DEFINE CLASS ScrollTimer AS Timer
PROCEDURE Timer
* If the first label is still in view, move it by one pixel
IF This.Parent.labelScroll1.Top > -100
This.Parent.labelScroll1.Top = This.Parent.labelScroll1.Top - 1
ELSE
* If the first label has scrolled out of view on the top of the container, move it back to the bottom.
This.Parent.labelScroll1.Top = 101
* Load some new text here
ENDIF
IF This.Parent.labelScroll2.Top > -100
* If the second label is still in view, move it by one pixel
This.Parent.labelScroll2.Top = This.Parent.labelScroll2.Top - 1
ELSE
* If the second label has scrolled out of view on the top of the container, move it back to the bottom.
This.Parent.labelScroll2.Top = 101
* Load some new text here
ENDIF
ENDPROC
ENDDEFINE
A: Unfortunately the nature of my work leaves me no time for fooling around with graphics, however if I did I would look into using GDI+ with VFP. Here is an article to get you started
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to get email and their attachments from PHP I'm writing a photo gallery webapp for a friend's wedding and they want a photo gallery for guests to submit the digital photos they take on the day.
After evaluating all the options, I've decided the easiest thing for users would be to let them use a familiar interface (their email) and just have them send in the pictures as attachments.
I've created a mailbox but now I need to connect and retrieve these attachments for automated processing and adding to the gallery system. But how? Are there any tutorials or prefab classes you've seen for doing this?
A: If you're creating a dedicated mailbox for this purpose, using a filtering mechanism is almost definitely not what you want. Instead, you want to have the mailbox be a pipe to the application, and have the application simply read in the message from stdin, parse out the body, and MIME parse the body to get the attachments.
Having a mailbox be a pipe is supported by all the popular unix-based MTAs that I know of, such as sendmail, postfix, and qmail. Generally you define it in your aliases file, like so:
# sendmail or postfix syntax
msgsubmit: "| /usr/bin/php ~path/to/example.php"
Then mails to msgsubmit@ get routed to a php program for delivery.
This has the advantage of not relying on an IMAP server or any other server beyond the MTA being alive, and it works fine as long as you have control over the MTA of the destination host. Filtering is what you'd want if you wanted all messages on a system to be inspected by the script, which I'm guessing is not the case.
If you want a copy kept in a mailbox somewhere (not a bad idea) simply define the alias to go to multiple addresses, like so:
msgsubmit: "| /usr/bin/php ~path/to/example.php", msgsubmit-box
Or postfix virtual format:
msgsubmit
"| /usr/bin/php ~path/to/example.php"
msgsubmit-box
A: Have you considered using Google's Picasa Web Albums?
You can set up an email address to send photos to and share them online.
You can then get an RSS feed of these photos, which most programmers are more familiar with than MTAs.
A: What MTA are you using? If you use postfix + maildrop you can create a filtering rule that pipes certain messages through a PHP script that then handles the incoming mails. (google for maildrop and xfilter).
A: I used to do a lot of this before, but I can't find the code; here's a scaled-down version I found. It should put you on the correct path. I used to run this type of script from a cronjob. Sorry I can't find the final version. ;(
// Open pop mailbox
if (!$mbox = imap_open ("{localhost:110/pop3/notls}INBOX", "user", "tester")) {
die ('Cannot connect/check pop mail! Exiting');
}
if ($hdr = imap_check($mbox)) {
$msgCount = $hdr->Nmsgs;
} else {
echo "Failed to get mail";
exit;
}
$MN=$msgCount;
$overview=imap_fetch_overview($mbox,"1:$MN",0);
for ($X = 1; $X <= $MN; $X++) {
$file = imap_fetchbody($mbox, $X, 1);
imap_delete($mbox, $X);
}
imap_expunge($mbox);
imap_close($mbox);
Good luck!
A: I think you want a MIME message parser.
I've used this one before and it seems to work fine, although I haven't tested it on really big attachments (i.e. 2-3MB files you might get from digital cameras).
Have you already got a system for reading POP3 / IMAP mailboxes? There is another class on the same site which also works on POP3 (I believe there is also an IMAP one) - however if you will be downloading a fair volume maybe you'll want to investigate a few C-based solutions as I believe that one is pure PHP.
A: Majordomo could be an alternative for handling emails, but there are some limitations on file attachment handling.
A: <?php
//make sure that submit button name is 'Submit'
if(isset($_POST['Submit'])){
$name = $_POST['visitorname'];
$email = $_POST['visitoremail'];
$message = $_POST['visitormessage'];
$to="youremail@yourdomain.com";
$subject="From ".$name;
$from = $email;
// generate a random string to be used as the boundary marker
$mime_boundary="==Multipart_Boundary_x".md5(mt_rand())."x";
// now we'll build the message headers
$headers = "From: $from\r\n" .
"MIME-Version: 1.0\r\n" .
"Content-Type: multipart/mixed;\r\n" .
" boundary=\"{$mime_boundary}\"";
// next, we'll build the invisible portion of the message body
// note that we insert two dashes in front of the MIME boundary
// when we use it
$message = "This is a multi-part message in MIME format.\n\n" .
"--{$mime_boundary}\n" .
"Content-Type: text/plain; charset=\"iso-8859-1\"\n" .
"Content-Transfer-Encoding: 7bit\n\n" .
$message . "\n\n";
foreach($_FILES as $userfile)
{
// store the file information to variables for easier access
$tmp_name = $userfile['tmp_name'];
$type = $userfile['type'];
$name = $userfile['name'];
$size = $userfile['size'];
// if the upload succeded, the file will exist
if (file_exists($tmp_name))
{
// check to make sure that it is an uploaded file and not a system file
if(is_uploaded_file($tmp_name))
{
// open the file for a binary read
$file = fopen($tmp_name,'rb');
// read the file content into a variable
$data = fread($file,filesize($tmp_name));
// close the file
fclose($file);
// now we encode it and split it into acceptable length lines
$data = chunk_split(base64_encode($data));
}
// now we'll insert a boundary to indicate we're starting the attachment
// we have to specify the content type, file name, and disposition as
// an attachment, then add the file content.
// NOTE: we don't set another boundary to indicate that the end of the
// file has been reached here. we only want one boundary between each file
// we'll add the final one after the loop finishes.
$message .= "--{$mime_boundary}\n" .
"Content-Type: {$type};\n" .
" name=\"{$name}\"\n" .
"Content-Disposition: attachment;\n" .
" filename=\"{$fileatt_name}\"\n" .
"Content-Transfer-Encoding: base64\n\n" .
$data . "\n\n";
}
}
$ok = @mail($to, $subject, $message , $headers);
if ($ok) {
if (($_FILES["file"]["type"] == "image/gif")
|| ($_FILES["file"]["type"] == "image/jpeg")
|| ($_FILES["file"]["type"] == "image/pjpeg")
&& ($_FILES["file"]["size"] < 20000))
{
if ($_FILES["file"]["error"] > 0)
{
echo "Return Code: " . $_FILES["file"]["error"] . "<br />";
}
else
{
if (file_exists("upload/" . $_FILES["file"]["name"]))
{
echo $_FILES["file"]["name"] . " already exists. ";
}
else
{
move_uploaded_file($_FILES["file"]["tmp_name"],
"upload/" . $_FILES["file"]["name"]);
}
}
}
else
{
}
echo "<span class='red'>E-mail has been sent successfully from $mail_name to $to</span>"; }
else{
echo "<span class='red'>Failed to send the E-mail from $from to $to</span>";
}
}
?>
P/S: I used this code. Hope it works and assists you. Just copy and paste. Make sure your text field names are the same as in this page. It works for all types of files. For further questions, just email me at shah@mc-oren.com. Anyway, I am also in a learning process. =) Thanks.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
}
|
Q: How can I trigger Core Animation on an animator proxy during a call to resizeSubviewsWithOldSize? I have some NSViews that I'm putting in one of two layouts depending on the size of my window.
I'm adjusting the layout when the relevant superview receives the resizeSubviewsWithOldSize method.
This works, but I'd like to animate the change. So naturally I tried calling the animator proxy when I set the new frames, but the animation won't run while the user is still dragging. If I release the mouse before the animation is scheduled to be done I can see the tail end of the animation, but nothing until then. I tried making sure kCATransactionDisableActions was set to NO, but that didn't help.
Is it possible to start a new animation and actually have it run during the resize?
A: I don't think you can do this easily because CA's animations are run via a timer and the timer won't fire during the runloop modes that are active while the user is dragging.
If you can control the runloop as the user is dragging, play around with the runloop modes. That'll make it work. I don't think you can change it on the CA side.
A: This really isn't an answer, but I would advise against animating anything while dragging to resize a window. The screen is already animating (from the window moving) - further animations are likely going to be visually confusing and extraneous.
CoreAnimation effects are best used to move from one known state to another - for example, when a preference window is resizing to accompany a new pane's contents, and you know both the old and new sizes, or when you are fading an object in or out (or both). Doing animation while the window is resizing is going to be visually confusing and make it harder for the user to focus on getting the size of the window where they want it to be.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Given a DateTime object, how do I get an ISO 8601 date in string format? Given:
DateTime.UtcNow
How do I get a string which represents the same value in an ISO 8601-compliant format?
Note that ISO 8601 defines a number of similar formats. The specific format I am looking for is:
yyyy-MM-ddTHH:mm:ssZ
A:
Note to readers: Several commenters have pointed out some problems in this answer (related particularly to the first suggestion). Refer to the comments section for more information.
DateTime.UtcNow.ToString("yyyy-MM-ddTHH\\:mm\\:ss.fffffffzzz", CultureInfo.InvariantCulture);
Using custom date-time formatting, this gives you a date similar to
2008-09-22T13:57:31.2311892-04:00.
Another way is:
DateTime.UtcNow.ToString("o", CultureInfo.InvariantCulture);
which uses the standard "round-trip" style (ISO 8601) to give you
2008-09-22T14:01:54.9571247Z.
To get the specified format, you can use:
DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ssZ", CultureInfo.InvariantCulture)
A: To convert DateTime.UtcNow to a string representation of yyyy-MM-ddTHH:mm:ssZ, you can use the ToString() method of the DateTime structure with a custom formatting string. When using custom format strings with a DateTime, it is important to remember that you need to escape your separators using single quotes.
The following will return the string represention you wanted:
DateTime.UtcNow.ToString("yyyy'-'MM'-'dd'T'HH':'mm':'ss'Z'", DateTimeFormatInfo.InvariantInfo)
A:
The "s" standard format specifier represents a custom date and time format string that is defined by the DateTimeFormatInfo.SortableDateTimePattern property. The pattern reflects a defined standard (ISO 8601), and the property is read-only. Therefore, it is always the same, regardless of the culture used or the format provider supplied. The custom format string is "yyyy'-'MM'-'dd'T'HH':'mm':'ss".
When this standard format specifier is used, the formatting or parsing operation always uses the invariant culture.
– from MSDN
A: It is interesting that the custom format "yyyy-MM-ddTHH:mm:ssK" (without milliseconds) is the quickest formatting method.
It is also interesting that the "s" format is slow on the full CLR and fast on .NET Core...
Of course the numbers are very close, and between some rows the difference is insignificant (tests with the _Verify suffix are identical to those without it, which demonstrates the repeatability of the results).
BenchmarkDotNet=v0.10.5, OS=Windows 10.0.14393
Processor=Intel Core i5-2500K CPU 3.30GHz (Sandy Bridge), ProcessorCount=4
Frequency=3233539 Hz, Resolution=309.2587 ns, Timer=TSC
[Host] : Clr 4.0.30319.42000, 64bit RyuJIT-v4.6.1637.0
Clr : Clr 4.0.30319.42000, 64bit RyuJIT-v4.6.1637.0
Core : .NET Core 4.6.25009.03, 64bit RyuJIT
Method | Job | Runtime | Mean | Error | StdDev | Median | Min | Max | Rank | Gen 0 | Allocated |
--------------------- |----- |-------- |-----------:|----------:|----------:|-----------:|-----------:|-----------:|-----:|-------:|----------:|
CustomDev1 | Clr | Clr | 1,089.0 ns | 22.179 ns | 20.746 ns | 1,079.9 ns | 1,068.9 ns | 1,133.2 ns | 8 | 0.1086 | 424 B |
CustomDev2 | Clr | Clr | 1,032.3 ns | 19.897 ns | 21.289 ns | 1,024.7 ns | 1,000.3 ns | 1,072.0 ns | 7 | 0.1165 | 424 B |
CustomDev2WithMS | Clr | Clr | 1,168.2 ns | 16.543 ns | 15.474 ns | 1,168.5 ns | 1,149.3 ns | 1,189.2 ns | 10 | 0.1625 | 592 B |
FormatO | Clr | Clr | 1,563.7 ns | 31.244 ns | 54.721 ns | 1,532.5 ns | 1,497.8 ns | 1,703.5 ns | 14 | 0.2897 | 976 B |
FormatS | Clr | Clr | 1,243.5 ns | 24.615 ns | 31.130 ns | 1,229.3 ns | 1,200.6 ns | 1,324.2 ns | 13 | 0.2865 | 984 B |
FormatS_Verify | Clr | Clr | 1,217.6 ns | 11.486 ns | 10.744 ns | 1,216.2 ns | 1,205.5 ns | 1,244.3 ns | 12 | 0.2885 | 984 B |
CustomFormatK | Clr | Clr | 912.2 ns | 17.915 ns | 18.398 ns | 916.6 ns | 878.3 ns | 934.1 ns | 4 | 0.0629 | 240 B |
CustomFormatK_Verify | Clr | Clr | 894.0 ns | 3.877 ns | 3.626 ns | 893.8 ns | 885.1 ns | 900.0 ns | 3 | 0.0636 | 240 B |
CustomDev1 | Core | Core | 989.1 ns | 12.550 ns | 11.739 ns | 983.8 ns | 976.8 ns | 1,015.5 ns | 6 | 0.1101 | 423 B |
CustomDev2 | Core | Core | 964.3 ns | 18.826 ns | 23.809 ns | 954.1 ns | 935.5 ns | 1,015.6 ns | 5 | 0.1267 | 423 B |
CustomDev2WithMS | Core | Core | 1,136.0 ns | 21.914 ns | 27.714 ns | 1,138.1 ns | 1,099.9 ns | 1,200.2 ns | 9 | 0.1752 | 590 B |
FormatO | Core | Core | 1,201.5 ns | 16.262 ns | 15.211 ns | 1,202.3 ns | 1,178.2 ns | 1,225.5 ns | 11 | 0.0656 | 271 B |
FormatS | Core | Core | 993.5 ns | 19.272 ns | 24.372 ns | 999.4 ns | 954.2 ns | 1,029.5 ns | 6 | 0.0633 | 279 B |
FormatS_Verify | Core | Core | 1,003.1 ns | 17.577 ns | 16.442 ns | 1,009.2 ns | 976.1 ns | 1,024.3 ns | 6 | 0.0674 | 279 B |
CustomFormatK | Core | Core | 878.2 ns | 17.017 ns | 20.898 ns | 877.7 ns | 851.4 ns | 928.1 ns | 2 | 0.0555 | 215 B |
CustomFormatK_Verify | Core | Core | 863.6 ns | 3.968 ns | 3.712 ns | 863.0 ns | 858.6 ns | 870.8 ns | 1 | 0.0550 | 215 B |
Code:
public class BenchmarkDateTimeFormat
{
public static DateTime dateTime = DateTime.Now;
[Benchmark]
public string CustomDev1()
{
var d = dateTime.ToUniversalTime();
var sb = new StringBuilder(20);
sb.Append(d.Year).Append("-");
if (d.Month <= 9)
sb.Append("0");
sb.Append(d.Month).Append("-");
if (d.Day <= 9)
sb.Append("0");
sb.Append(d.Day).Append("T");
if (d.Hour <= 9)
sb.Append("0");
sb.Append(d.Hour).Append(":");
if (d.Minute <= 9)
sb.Append("0");
sb.Append(d.Minute).Append(":");
if (d.Second <= 9)
sb.Append("0");
sb.Append(d.Second).Append("Z");
var text = sb.ToString();
return text;
}
[Benchmark]
public string CustomDev2()
{
var u = dateTime.ToUniversalTime();
var sb = new StringBuilder(20);
var y = u.Year;
var d = u.Day;
var M = u.Month;
var h = u.Hour;
var m = u.Minute;
var s = u.Second;
sb.Append(y).Append("-");
if (M <= 9)
sb.Append("0");
sb.Append(M).Append("-");
if (d <= 9)
sb.Append("0");
sb.Append(d).Append("T");
if (h <= 9)
sb.Append("0");
sb.Append(h).Append(":");
if (m <= 9)
sb.Append("0");
sb.Append(m).Append(":");
if (s <= 9)
sb.Append("0");
sb.Append(s).Append("Z");
var text = sb.ToString();
return text;
}
[Benchmark]
public string CustomDev2WithMS()
{
var u = dateTime.ToUniversalTime();
var sb = new StringBuilder(23);
var y = u.Year;
var d = u.Day;
var M = u.Month;
var h = u.Hour;
var m = u.Minute;
var s = u.Second;
var ms = u.Millisecond;
sb.Append(y).Append("-");
if (M <= 9)
sb.Append("0");
sb.Append(M).Append("-");
if (d <= 9)
sb.Append("0");
sb.Append(d).Append("T");
if (h <= 9)
sb.Append("0");
sb.Append(h).Append(":");
if (m <= 9)
sb.Append("0");
sb.Append(m).Append(":");
if (s <= 9)
sb.Append("0");
sb.Append(s).Append(".");
// pad milliseconds to three digits (e.g. 5 -> "005", 45 -> "045")
if (ms <= 99)
sb.Append("0");
if (ms <= 9)
sb.Append("0");
sb.Append(ms).Append("Z");
var text = sb.ToString();
return text;
}
[Benchmark]
public string FormatO()
{
var text = dateTime.ToUniversalTime().ToString("o");
return text;
}
[Benchmark]
public string FormatS()
{
var text = string.Concat(dateTime.ToUniversalTime().ToString("s"),"Z");
return text;
}
[Benchmark]
public string FormatS_Verify()
{
var text = string.Concat(dateTime.ToUniversalTime().ToString("s"), "Z");
return text;
}
[Benchmark]
public string CustomFormatK()
{
var text = dateTime.ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssK");
return text;
}
[Benchmark]
public string CustomFormatK_Verify()
{
var text = dateTime.ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssK");
return text;
}
}
https://github.com/dotnet/BenchmarkDotNet was used
A: Use:
private void TimeFormats()
{
DateTime localTime = DateTime.Now;
DateTime utcTime = DateTime.UtcNow;
DateTimeOffset localTimeAndOffset = new DateTimeOffset(localTime, TimeZoneInfo.Local.GetUtcOffset(localTime));
//UTC
string strUtcTime_o = utcTime.ToString("o");
string strUtcTime_s = utcTime.ToString("s");
string strUtcTime_custom = utcTime.ToString("yyyy-MM-ddTHH:mm:ssK");
//Local
string strLocalTimeAndOffset_o = localTimeAndOffset.ToString("o");
string strLocalTimeAndOffset_s = localTimeAndOffset.ToString("s");
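//NB: the next line formats utcTime, not localTimeAndOffset, which is why the "custom" value under "Local Time" in the output below ends in Z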
string strLocalTimeAndOffset_custom = utcTime.ToString("yyyy-MM-ddTHH:mm:ssK");
//Output
Response.Write("<br/>UTC<br/>");
Response.Write("strUtcTime_o: " + strUtcTime_o + "<br/>");
Response.Write("strUtcTime_s: " + strUtcTime_s + "<br/>");
Response.Write("strUtcTime_custom: " + strUtcTime_custom + "<br/>");
Response.Write("<br/>Local Time<br/>");
Response.Write("strLocalTimeAndOffset_o: " + strLocalTimeAndOffset_o + "<br/>");
Response.Write("strLocalTimeAndOffset_s: " + strLocalTimeAndOffset_s + "<br/>");
Response.Write("strLocalTimeAndOffset_custom: " + strLocalTimeAndOffset_custom + "<br/>");
}
OUTPUT
UTC
strUtcTime_o: 2012-09-17T22:02:51.4021600Z
strUtcTime_s: 2012-09-17T22:02:51
strUtcTime_custom: 2012-09-17T22:02:51Z
Local Time
strLocalTimeAndOffset_o: 2012-09-17T15:02:51.4021600-07:00
strLocalTimeAndOffset_s: 2012-09-17T15:02:51
strLocalTimeAndOffset_custom: 2012-09-17T22:02:51Z
Sources:
*
*Standard Date and Time Format Strings (MSDN)
*Custom Date and Time Format Strings (MSDN)
A: DateTime.UtcNow.ToString("s", System.Globalization.CultureInfo.InvariantCulture) should give you what you are looking for as the "s" format specifier is described as a sortable date/time pattern; conforms to ISO 8601.
EDIT: To get the additional Z at the end as the OP requires, use "o" instead of "s".
A: System.DateTime.UtcNow.ToString("o")
=>
val it : string = "2013-10-13T13:03:50.2950037Z"
A: Using Newtonsoft.Json, you can do
JsonConvert.SerializeObject(DateTime.UtcNow)
Example: https://dotnetfiddle.net/O2xFSl
A: Surprised that no one suggested it:
System.DateTime.UtcNow.ToString("u").Replace(' ','T')
# Using PowerShell Core to demo
# Lowercase "u" format
[System.DateTime]::UtcNow.ToString("u")
> 2020-02-06 01:00:32Z
# Lowercase "u" format with replacement
[System.DateTime]::UtcNow.ToString("u").Replace(' ','T')
> 2020-02-06T01:00:32Z
The UniversalSortableDateTimePattern gets you almost all the way to what you want (which is more an RFC 3339 representation).
Added:
I decided to use the benchmarks that were in answer https://stackoverflow.com/a/43793679/653058 to compare how this performs.
tl;dr: it's at the expensive end, but still just a little over 650 nanoseconds on my crappy old laptop :-)
Implementation:
[Benchmark]
public string ReplaceU()
{
var text = dateTime.ToUniversalTime().ToString("u").Replace(' ', 'T');
return text;
}
Results:
// * Summary *
BenchmarkDotNet=v0.11.5, OS=Windows 10.0.19002
Intel Xeon CPU E3-1245 v3 3.40GHz, 1 CPU, 8 logical and 4 physical cores
.NET Core SDK=3.0.100
[Host] : .NET Core 3.0.0 (CoreCLR 4.700.19.46205, CoreFX 4.700.19.46214), 64bit RyuJIT
DefaultJob : .NET Core 3.0.0 (CoreCLR 4.700.19.46205, CoreFX 4.700.19.46214), 64bit RyuJIT
| Method | Mean | Error | StdDev |
|--------------------- |---------:|----------:|----------:|
| CustomDev1 | 562.4 ns | 11.135 ns | 10.936 ns |
| CustomDev2 | 525.3 ns | 3.322 ns | 3.107 ns |
| CustomDev2WithMS | 609.9 ns | 9.427 ns | 8.356 ns |
| FormatO | 356.6 ns | 6.008 ns | 5.620 ns |
| FormatS | 589.3 ns | 7.012 ns | 6.216 ns |
| FormatS_Verify | 599.8 ns | 12.054 ns | 11.275 ns |
| CustomFormatK | 549.3 ns | 4.911 ns | 4.594 ns |
| CustomFormatK_Verify | 539.9 ns | 2.917 ns | 2.436 ns |
| ReplaceU | 615.5 ns | 12.313 ns | 11.517 ns |
// * Hints *
Outliers
BenchmarkDateTimeFormat.CustomDev2WithMS: Default -> 1 outlier was removed (668.16 ns)
BenchmarkDateTimeFormat.FormatS: Default -> 1 outlier was removed (621.28 ns)
BenchmarkDateTimeFormat.CustomFormatK: Default -> 1 outlier was detected (542.55 ns)
BenchmarkDateTimeFormat.CustomFormatK_Verify: Default -> 2 outliers were removed (557.07 ns, 560.95 ns)
// * Legends *
Mean : Arithmetic mean of all measurements
Error : Half of 99.9% confidence interval
StdDev : Standard deviation of all measurements
1 ns : 1 Nanosecond (0.000000001 sec)
// ***** BenchmarkRunner: End *****
A: You have a few options including the "Round-trip ("O") format specifier".
var date1 = new DateTime(2008, 3, 1, 7, 0, 0);
Console.WriteLine(date1.ToString("O"));
Console.WriteLine(date1.ToString("s", System.Globalization.CultureInfo.InvariantCulture));
Output
2008-03-01T07:00:00.0000000
2008-03-01T07:00:00
However, DateTime + TimeZone may present other problems as described in the blog post DateTime and DateTimeOffset in .NET: Good practices and common pitfalls:
DateTime has countless traps in it that are designed to give your code bugs:
1. DateTime values with DateTimeKind.Unspecified are bad news.
2. DateTime doesn't care about UTC/Local when doing comparisons.
3. DateTime values are not aware of standard format strings.
4. Parsing a string that has a UTC marker with DateTime does not guarantee a UTC time.
A: You can get the "Z" (ISO 8601 UTC) with the following code:
Dim tmpDate As DateTime = New DateTime(Now.Ticks, DateTimeKind.Utc)
Dim res as String = tmpDate.toString("o") '2009-06-15T13:45:30.0000000Z
Here is why:
The ISO 8601 standard has a few different formats:
DateTimeKind.Local
2009-06-15T13:45:30.0000000-07:00
DateTimeKind.Utc
2009-06-15T13:45:30.0000000Z
DateTimeKind.Unspecified
2009-06-15T13:45:30.0000000
.NET provides us with an enum with those options:
'2009-06-15T13:45:30.0000000-07:00
Dim strTmp1 As String = New DateTime(Now.Ticks, DateTimeKind.Local).ToString("o")
'2009-06-15T13:45:30.0000000Z
Dim strTmp2 As String = New DateTime(Now.Ticks, DateTimeKind.Utc).ToString("o")
'2009-06-15T13:45:30.0000000
Dim strTmp3 As String = New DateTime(Now.Ticks, DateTimeKind.Unspecified).ToString("o")
Note: If you inspect the ToString("o") expression with the Visual Studio 2008 "watch" utility, you may get different results. I don't know if it's a bug, but in this case you get better results by assigning to a String variable while debugging.
Source: Standard Date and Time Format Strings (MSDN)
A: If you're developing under SharePoint 2010 or higher you can use
using Microsoft.SharePoint;
using Microsoft.SharePoint.Utilities;
...
string strISODate = SPUtility.CreateISO8601DateTimeFromSystemDateTime(DateTime.Now)
A: To get a format like 2018-06-22T13:04:16, which can be passed in the URI of an API, use:
public static string FormatDateTime(DateTime dateTime)
{
return dateTime.ToString("s", System.Globalization.CultureInfo.InvariantCulture);
}
A: I would just use XmlConvert:
XmlConvert.ToString(DateTime.UtcNow, XmlDateTimeSerializationMode.RoundtripKind);
It will automatically preserve the time zone.
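A quick usage sketch (the printed value is illustrative):
using System;
using System.Xml;

class Demo
{
    static void Main()
    {
        // RoundtripKind preserves the DateTimeKind: a Utc value gets a trailing "Z",
        // a Local value gets its offset, and an Unspecified value gets neither.
        string s = XmlConvert.ToString(DateTime.UtcNow, XmlDateTimeSerializationMode.RoundtripKind);
        Console.WriteLine(s); // e.g. 2008-09-22T14:01:54.9571247Z
    }
}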
A: Most of these answers have milliseconds / microseconds which clearly isn't supported by ISO 8601. The correct answer would be:
System.DateTime.Now.ToString("yyyy-MM-ddTHH:mm:ssK");
// or
System.DateTime.Now.ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssK");
References:
*
*ISO 8601 specification
*"K" Specifier
A: DateTime.Now.ToString("yyyy-MM-dd'T'HH:mm:ss zzz");
DateTime.Now.ToString("O");
NOTE: Depending on the conversion you are doing on your end, you will be using the first line (most likely) or the second one.
Make sure to apply this format only to local times, since "zzz" is the time-zone offset information used for the UTC conversion.
A: DateTime.UtcNow.ToString("s")
Returns something like 2008-04-10T06:30:00
UtcNow obviously returns a UTC time so there is no harm in:
string.Concat(DateTime.UtcNow.ToString("s"), "Z")
A: As mentioned in another answer, DateTime has issues by design.
NodaTime
I suggest to use NodaTime to manage date/time values:
*
*Local time, date, datetime
*Global time
*Time with timezone
*Period
*Duration
Formatting
So, to create and format ZonedDateTime you can use the following code snippet:
var instant1 = Instant.FromUtc(2020, 06, 29, 10, 15, 22);
var utcZonedDateTime = new ZonedDateTime(instant1, DateTimeZone.Utc);
utcZonedDateTime.ToString("yyyy-MM-ddTHH:mm:ss'Z'", CultureInfo.InvariantCulture);
// 2020-06-29T10:15:22Z
var instant2 = Instant.FromDateTimeUtc(new DateTime(2020, 06, 29, 10, 15, 22, DateTimeKind.Utc));
var amsterdamZonedDateTime = new ZonedDateTime(instant2, DateTimeZoneProviders.Tzdb["Europe/Amsterdam"]);
amsterdamZonedDateTime.ToString("yyyy-MM-ddTHH:mm:ss'Z'", CultureInfo.InvariantCulture);
// 2020-06-29T12:15:22Z
To me, NodaTime code looks quite verbose, but the types are really useful: they help you handle date/time values correctly.
Newtonsoft.Json
To use NodaTime with Newtonsoft.Json you need to add reference to NodaTime.Serialization.JsonNet NuGet package and configure JSON options.
services
.AddMvc()
.AddJsonOptions(options =>
{
var settings=options.SerializerSettings;
settings.DateParseHandling = DateParseHandling.None;
settings.ConfigureForNodaTime(DateTimeZoneProviders.Tzdb);
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "998"
}
|
Q: PHP and MS Access How can we connect a PHP script to MS Access (.mdb) file?
I tried by including following PHP code:
$db_path = $_SERVER['DOCUMENT_ROOT'] . '\WebUpdate\\' . $file_name . '.mdb';
$cfg_dsn = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" . $db_path;
$odbcconnect = odbc_connect($cfg_dsn, '', '');
But it failed and I received following error message:
Warning: odbc_connect() [function.odbc-connect]: SQL error: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified, SQL state IM002 in SQLConnect in C:\web\WebUpdate\index.php on line 41
A: Here's a sample for a connect and a simple select...
<?php
$db_conn = new COM("ADODB.Connection");
$connstr = "DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=". realpath("./Northwind.mdb").";";
$db_conn->open($connstr);
$rS = $db_conn->execute("SELECT * FROM Employees");
$f1 = $rS->Fields(0);
$f2 = $rS->Fields(1);
while (!$rS->EOF)
{
print $f1->value." ".$f2->value."<br />\n";
$rS->MoveNext();
}
$rS->Close();
$db_conn->Close();
?>
A: In the filename, I'm looking at '\WebUpdate\' - it looks like you have one backslash at the beginning and two at the end. Are you maybe missing a backslash at the beginning?
A: $db_path = $_SERVER['DOCUMENT_ROOT'] . '\WebUpdate\\' . $file_name . '.mdb';
replace the backslashes with slashes use . '/WebUpdate/' .
A: It looks like a problem with the path separators. ISTR that you have to pass backslashes, not forward slashes.
The following works for me - with an MDB file in the webroot called db4
$defdir = str_replace("/", "\\", $_SERVER["DOCUMENT_ROOT"]);
$dbq = $defdir . "\\db4.mdb";
if (!file_exists($dbq)) { die("Database file $dbq does not exist"); }
$dsn = "DRIVER=Microsoft Access Driver (*.mdb);UID=admin;UserCommitSync=Yes;Threads=3;SafeTransactions=0;PageTimeout=5;MaxScanRows=8;MaxBufferSize=2048;FIL=MS Access;DriverId=25;DefaultDir=$defdir;DBQ=$dbq";
$odbc_conn = odbc_connect($dsn,"","")
or die("Could not connect to Access database $dsn");
A: I'm not certain if this is a violation of best practices or security, but I would like to throw out this suggestion:
set up an ODBC connection and include the database's password in the odbc advance settings.
give the odbc conn a DSN name then save.
in your code, just set up the connection like:
try {
$conn = @odbc_connect("DSNName", "", "", "SQL_CUR_USE_ODBC");
// un and pw parameters are passed as empty strings since the DSN
// has knowledge of the password already.
// 4th parameter is optional
$exec = @odbc_exec($conn, $insert) or die ("exec error");
echo "success!";
}
catch (Exception $e) {
echo $e->getMessage();
} // end try catch
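A similar connection can also be sketched DSN-less with PDO's ODBC driver (the database path and table name here are hypothetical, and the PDO_ODBC extension must be enabled):
<?php
$dbq = realpath($_SERVER['DOCUMENT_ROOT'] . '/WebUpdate/WebUpdate.mdb');
try {
    $pdo = new PDO("odbc:Driver={Microsoft Access Driver (*.mdb)};Dbq=$dbq;");
    foreach ($pdo->query('SELECT * FROM Employees') as $row) {
        echo $row[0], ' ', $row[1], "<br />\n";
    }
} catch (PDOException $e) {
    echo 'Connection failed: ', $e->getMessage();
}
?>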
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/114996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I generate "migration" DDL from NHibernate mapping files? I'm using NHibernate 2 and PostgreSQL in my project. The SchemaExport class does a great job of generating the DDL schema for the database, but it's only useful up until the first release of the application.
Is there any way to generate "migration" DDL (a batch of "ALTER TABLE"s instead of DROP/CREATE pairs) using NHibernate mapping files?
A: Look into SchemaUpdate. It has a very similar API to SchemaExport, but it only generates the changes (migrations).
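A minimal sketch of invoking it (assuming the NHibernate 2.x hbm2ddl API):
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

var cfg = new Configuration().Configure(); // reads hibernate.cfg.xml and the mappings
var update = new SchemaUpdate(cfg);
update.Execute(true, true); // first flag: script to stdout; second flag: apply to the database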
A: While SchemaUpdate very much answers my needs, it still has several problems. For example, it refuses to put a new constraint on an existing database column even if it would not conflict with existing data.
I'm going to try to extend SchemaUpdate a little or, failing that, switch to one of the hand-driven migration tools (for example, the Rails one).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How can we print line numbers to the log in java How can I print line numbers to the log? Say, when outputting some information to the log, I also want to print the line number in the source code where that output happens. As we can see in a stack trace, it displays the line number where the exception occurred, and the stack trace is available on the exception object.
Another alternative would be to manually include the line number when printing to the log. Is there any other way?
A: We ended up using a custom class like this for our Android work:
import android.util.Log;
public class DebugLog {
public final static boolean DEBUG = true;
public static void log(String message) {
if (DEBUG) {
String fullClassName = Thread.currentThread().getStackTrace()[2].getClassName();
String className = fullClassName.substring(fullClassName.lastIndexOf(".") + 1);
String methodName = Thread.currentThread().getStackTrace()[2].getMethodName();
int lineNumber = Thread.currentThread().getStackTrace()[2].getLineNumber();
Log.d(className + "." + methodName + "():" + lineNumber, message);
}
}
}
A: The code posted by @simon.buchan will work...
Thread.currentThread().getStackTrace()[2].getLineNumber()
But if you call it from within a helper method, it will always return the line number of that line inside the method, so use the code snippet inline instead.
A: I would recommend using a logging toolkit such as log4j. Logging is configurable via properties files at runtime, and you can turn on / off features such as line number / filename logging.
Looking at the javadoc for the PatternLayout gives you the full list of options - what you're after is %L.
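For instance, a log4j.properties layout along these lines (a sketch; adjust the appender to your setup) pulls in the line number via %L:
# %C = class, %M = method, %F = file, %L = line number
log4j.rootLogger=DEBUG, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%-5p %C.%M(%F:%L) - %m%n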
A: I use this little method that outputs the trace and line number of the method that called it.
Log.d(TAG, "Where did i put this debug code again? " + Utils.lineOut());
Double click the output to go to that source code line!
You might need to adjust the level value depending on where you put your code.
public static String lineOut() {
int level = 3;
StackTraceElement[] traces;
traces = Thread.currentThread().getStackTrace();
return (" at " + traces[level] + " " );
}
A: Quick and dirty way:
System.out.println("I'm in line #" +
new Exception().getStackTrace()[0].getLineNumber());
With some more details:
StackTraceElement l = new Exception().getStackTrace()[0];
System.out.println(
l.getClassName()+"/"+l.getMethodName()+":"+l.getLineNumber());
That will output something like this:
com.example.mytest.MyClass/myMethod:103
A: I am compelled to answer by not answering your question. I'm assuming that you are looking for the line number solely to support debugging. There are better ways. There are hackish ways to get the current line. All I've seen are slow. You are better off using a logging framework like that in java.util.logging package or log4j. Using these packages you can configure your logging information to include context down to the class name. Then each log message would be unique enough to know where it came from. As a result, your code will have a 'logger' variable that you call via
logger.debug("a really descriptive message")
instead of
System.out.println("a really descriptive message")
A: Log4J allows you to include the line number as part of its output pattern. See http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html for details on how to do this (the key element in the conversion pattern is "L"). However, the Javadoc does include the following:
WARNING Generating caller location
information is extremely slow. It's
use should be avoided unless execution
speed is not an issue.
A: From Angsuman Chakraborty (archived) :
/** Get the current line number.
* @return int - Current line number.
*/
public static int getLineNumber() {
return Thread.currentThread().getStackTrace()[2].getLineNumber();
}
A: You can't guarantee line number consistency with code, especially if it is compiled for release. I would not recommend using line numbers for that purpose anyway, it would be better to give a payload of the place where the exception was raised (the trivial method being to set the message to include the details of the method call).
You might like to look at exception enrichment as a technique to improve exception handling
http://tutorials.jenkov.com/java-exception-handling/exception-enrichment.html
A: If it's been compiled for release this isn't possible. You might want to look into something like Log4J which will automatically give you enough information to determine pretty closely where the logged code occurred.
A: First, the general method (in a utility class, in plain old Java 1.4 code though; you may have to rewrite it for Java 1.5 and later):
/**
* Returns the first "[class#method(line)]: " of the first class not equal to "StackTraceUtils" and aclass. <br />
* Allows to get past a certain class.
* @param aclass class to get pass in the stack trace. If null, only try to get past StackTraceUtils.
* @return "[class#method(line)]: " (never empty, because if aclass is not found, returns first class past StackTraceUtils)
*/
public static String getClassMethodLine(final Class aclass) {
final StackTraceElement st = getCallingStackTraceElement(aclass);
final String amsg = "[" + st.getClassName() + "#" + st.getMethodName() + "(" + st.getLineNumber()
+")] <" + Thread.currentThread().getName() + ">: ";
return amsg;
}
Then the specific utility method to get the right stackElement:
/**
* Returns the first stack trace element of the first class not equal to "StackTraceUtils" or "LogUtils" and aClass. <br />
* Stored in array of the callstack. <br />
* Allows to get past a certain class.
* @param aclass class to get pass in the stack trace. If null, only try to get past StackTraceUtils.
* @return stackTraceElement (never null, because if aClass is not found, returns first class past StackTraceUtils)
* @throws AssertionFailedException if resulting statckTrace is null (RuntimeException)
*/
public static StackTraceElement getCallingStackTraceElement(final Class aclass) {
final Throwable t = new Throwable();
final StackTraceElement[] ste = t.getStackTrace();
int index = 1;
final int limit = ste.length;
StackTraceElement st = ste[index];
String className = st.getClassName();
boolean aclassfound = false;
if(aclass == null) {
aclassfound = true;
}
StackTraceElement resst = null;
while(index < limit) {
if(shouldExamine(className, aclass) == true) {
if(resst == null) {
resst = st;
}
if(aclassfound == true) {
final StackTraceElement ast = onClassfound(aclass, className, st);
if(ast != null) {
resst = ast;
break;
}
}
else
{
if(aclass != null && aclass.getName().equals(className) == true) {
aclassfound = true;
}
}
}
index = index + 1;
st = ste[index];
className = st.getClassName();
}
if(isNull(resst)) {
throw new AssertionFailedException(StackTraceUtils.getClassMethodLine() + " null argument:" + "stack trace should null"); //$NON-NLS-1$
}
return resst;
}
static private boolean shouldExamine(String className, Class aclass) {
final boolean res = StackTraceUtils.class.getName().equals(className) == false && (className.endsWith(LOG_UTILS
) == false || (aclass !=null && aclass.getName().endsWith(LOG_UTILS)));
return res;
}
static private StackTraceElement onClassfound(Class aclass, String className, StackTraceElement st) {
StackTraceElement resst = null;
if(aclass != null && aclass.getName().equals(className) == false)
{
resst = st;
}
if(aclass == null)
{
resst = st;
}
return resst;
}
A: Here is the logger that we use.
It wraps the Android Logger and displays the class name, method name, and line number.
http://www.hautelooktech.com/2011/08/15/android-logging/
A: Look at this link. With that method you can jump to your line of code when you double-click on the LogCat row.
You can also use this code to get the line number:
public static int getLineNumber()
{
int lineNumber = 0;
StackTraceElement[] stackTraceElement = Thread.currentThread()
.getStackTrace();
int currentIndex = -1;
for (int i = 0; i < stackTraceElement.length; i++) {
if (stackTraceElement[i].getMethodName().compareTo("getLineNumber") == 0)
{
currentIndex = i + 1;
break;
}
}
lineNumber = stackTraceElement[currentIndex].getLineNumber();
return lineNumber;
}
A: private static final int CLIENT_CODE_STACK_INDEX;
static {
// Finds out the index of "this code" in the returned stack Trace - funny but it differs in JDK 1.5 and 1.6
int i = 0;
for (StackTraceElement ste : Thread.currentThread().getStackTrace()) {
i++;
if (ste.getClassName().equals(Trace.class.getName())) {
break;
}
}
CLIENT_CODE_STACK_INDEX = i;
}
private String methodName() {
StackTraceElement ste=Thread.currentThread().getStackTrace()[CLIENT_CODE_STACK_INDEX+1];
return ste.getMethodName()+":"+ste.getLineNumber();
}
A: These all get you the line numbers of your current thread and method, which works great if you use a try/catch where you are expecting an exception. But if you want to catch any unhandled exception, then you are using the default uncaught exception handler, and the current thread will return the line number of the handler function, not the class method that threw the exception. Instead of using Thread.currentThread(), simply use the Throwable passed in by the exception handler:
Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
public void uncaughtException(Thread t, Throwable e) {
if(fShowUncaughtMessage(e,t))
System.exit(1);
}
});
In the above use e.getStackTrace()[0] in your handler function (fShowUncaughtMessage) to get the offender.
A: The code below is tested; it logs the line number, class name, and method name of the call site of the logging method.
public class Utils {
/*
* debug variable enables/disables all log messages to logcat
* Useful to disable prior to app store submission
*/
public static final boolean debug = true;
/*
* l method used to log passed string and returns the
* calling file as the tag, method and line number prior
* to the string's message
*/
public static void l(String s) {
if (debug) {
String[] msg = trace(Thread.currentThread().getStackTrace(), 3);
Log.i(msg[0], msg[1] + s);
} else {
return;
}
}
/*
* l (tag, string)
* used to pass logging messages as normal but can be disabled
* when debug == false
*/
public static void l(String t, String s) {
if (debug) {
Log.i(t, s);
} else {
return;
}
}
/*
* trace
* Gathers the calling file, method, and line from the stack
* returns a string array with element 0 as file name and
* element 1 as method[line]
*/
public static String[] trace(final StackTraceElement e[], final int level) {
if (e != null && e.length >= level) {
final StackTraceElement s = e[level];
if (s != null) { return new String[] {
e[level].getFileName(), e[level].getMethodName() + "[" + e[level].getLineNumber() + "]"
};}
}
return null;
}
}
A: The stackLevel depends on the call depth at which you invoke this method. You can try values from 0 up to a large number to see the difference.
If stackLevel is legal, you will get a string like java.lang.Thread.getStackTrace(Thread.java:1536).
public static String getCodeLocationInfo(int stackLevel) {
StackTraceElement[] stackTraceElements = Thread.currentThread().getStackTrace();
if (stackLevel < 0 || stackLevel >= stackTraceElements.length) {
return "Stack Level Out Of StackTrace Bounds";
}
StackTraceElement stackTraceElement = stackTraceElements[stackLevel];
String fullClassName = stackTraceElement.getClassName();
String methodName = stackTraceElement.getMethodName();
String fileName = stackTraceElement.getFileName();
int lineNumber = stackTraceElement.getLineNumber();
return String.format("%s.%s(%s:%s)", fullClassName, methodName, fileName, lineNumber);
}
A: This is exactly the feature I implemented in this lib
XDDLib. (But, it's for android)
Lg.d("int array:", intArrayOf(1, 2, 3), "int list:", listOf(4, 5, 6))
One click on the underlined text to navigate to where the log command is
That StackTraceElement is determined by the first element outside this library. Thus, anywhere outside this lib will be legal, including lambda expression, static initialization block, etc.
A: For anyone wondering, the index in the getStackTrace()[3] call is the number of stack frames between the line that triggers the log and the actual .getStackTrace() invocation, excluding the executing line itself.
This means that if the Thread.currentThread().getStackTrace()[X].getLineNumber(); line is executed three nested methods deep, the index number must be 3.
Example:
First layer
private static String message(String TAG, String msg) {
int lineNumber = Thread.currentThread().getStackTrace()[3].getLineNumber();
return ".(" + TAG + ".java:"+ lineNumber +")" + " " + msg;
}
Second Layer
private static void print(String s) {
System.out.println(s);
}
Third Layer
public static void normal(
String TAG,
String message
) {
print(
message(
TAG,
message
)
);
}
Executing Line:
Print.normal(TAG, "StatelessDispatcher");
As someone who has not received any formal education in IT, this has been eye-opening about how compilers work.
A: This is the code which prints the line number.
Thread.currentThread().getStackTrace()[2].getLineNumber()
Create a global public static method to make printing Logs easier.
public static void Loge(Context context, String strMessage, int strLineNumber) {
Log.e(context.getClass().getSimpleName(), strLineNumber + " : " + strMessage);
}
A: you can use -> Reporter.log("");
A: This is the way that works for me:
String a = null; // declared here so the catch block can return it
String str = "select os.name from os where os.idos=" + nameid;
try {
PreparedStatement stmt = conn.prepareStatement(str);
ResultSet rs = stmt.executeQuery();
if (rs.next()) {
a = rs.getString("os.n1ame");//<<<----Here is the ERROR
}
stmt.close();
} catch (SQLException e) {
System.out.println("error line : " + e.getStackTrace()[2].getLineNumber());
return a;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "148"
}
|
Q: Keep sources from external repositories up-to-date After you start tracking the source of a bunch of open source software, how do you keep your code in sync? Run svn update every time you want to look at or play with the code?
It strikes me that it would be better to essentially start mirroring the code with (say) a cron job every night. Have people set up workflows to do this sort of thing? (With alerts when/if any changes you make to the code end up conflicting with the latest update?)
Or am I on my own? (I'm running Mac OS X but interested in general as well as specific solutions.)
A: The general workflow recommended by the Subversion book is to update your working copy often; at the start of every work-day is a good time. But you don't have to. Just update whenever you feel like seeing the latest changes.
I have a number of open source repositories checked out under a src/ directory. Every couple of days, I remember to run 'svn up *' from that directory, and it updates all the working copies contained there.
A: If your repository sends you an email every time someone checks in, why not have a program that checks for those emails and then updates the working copy at that time? This way you're always up to date. Caveats include needlessly burning bandwidth and the possibility of getting odd conflicts when a file you're working on gets updated.
Just updating once a day, or once every few days, is only useful when there are a limited number of people working on a project, all in disparate areas of it. When you've got more than five people, and the possibility that they are working in similar parts of the code, updating once an hour, or more frequently, is much better.
A: I really only update often when I use an open source library in my own application; the external repository is actually part of my project tree, so when I update my project it also updates the external repository. When you only look at code for research, I think it only makes sense to update when you want to look at a new feature they have released.
A: You might want to look into using svn:externals: http://svnbook.red-bean.com/en/1.0/ch07s03.html
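A rough sketch of setting one up from the command line (the URL and directory are made up):
svn propset svn:externals "vendor/libfoo http://svn.example.com/libfoo/trunk" .
svn commit -m "Pull in libfoo as an external"
svn update
After that, a plain svn update also checks out and refreshes vendor/libfoo alongside your own code.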
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: An Issue with converting enumerations in C++\CLI I have an assembly, written in C++\CLI, which uses some of the enumerations provided by .NET. It has this kind of property:
property System::ServiceProcess::ServiceControllerStatus ^ Status
{
ServiceControllerStatus ^ get()
{
return (ServiceControllerStatus)_status->dwCurrentState;
}
}
it works fine, but when I use this assembly from my C# code, the type of this property is
System.Enum
and I have to make a type cast:
if ((ServiceControllerStatus)currentService.Status == ServiceControllerStatus.Running)
//do smth
The question is simple: why is this so, and how do I fix it?
A: In C++/CLI, ^ is analogous to * in standard C++. Because enumerations are value types, the ^ should not be included; otherwise you will see them as System.Enum.
Remove the ^ and you will see the correct enumeration on C# side.
property System::ServiceProcess::ServiceControllerStatus Status
{
System::ServiceProcess::ServiceControllerStatus get()
{
return (System::ServiceProcess::ServiceControllerStatus)_status->dwCurrentState;
}
}
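With the ^ removed, the cast on the C# side becomes unnecessary; a sketch using the question's own names:
if (currentService.Status == ServiceControllerStatus.Running)
{
    //do smth
}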
A: I think enums don't use the ^ -- try removing it from the property declaration and get().
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Is there any reason to not use my IoC as a general Settings Repository? Suppose that the ApplicationSettings class is a general repository of settings that apply to my application such as TimeoutPeriod, DefaultUnitOfMeasure, HistoryWindowSize, etc... And let's say MyClass makes use of one of those settings - DefaultUnitOfMeasure.
My reading of proper use of Inversion of Control Containers - and please correct me if I'm wrong on this - is that you define the dependencies of a class in its constructor:
public class MyClass {
public MyClass(IDataSource ds, UnitOfMeasure default_uom) {...}
}
and then call instantiate your class with something like
var mc = IoC.Container.Resolve<MyClass>();
Where IDataSource has been assigned a concrete implementation and default_uom has been wired up to instantiate from the ApplicationSettings.DefaultUnitOfMeasure property. I've got to wonder, however, if all these hoops are really necessary to jump through. What trouble am I setting myself up for should I do
public class MyClass {
public MyClass(IDataSource ds) {
UnitOfMeasure duom = IoC.Container.Resolve<UnitOfMeasure>("default_uom");
}
}
Yes, many of my classes end up with a dependency on IoC.Container, but that is a dependency most of my classes will have anyway. It seems like I might as well make full use of it as long as the classes are coupled. Agile gurus, please tell me where I'm wrong.
A: I don't have many classes depending on my IoC container. I try to wrap the IoC stuff in a facade object that I inject into other classes, and most of my IoC injection happens only in the higher layers of my application.
If you do things your way, you can't test MyClass without creating an IoC configuration for your tests. This will make your tests harder to maintain.
Another problem is that you're going to have power users of your software who want to change the configuration by editing your IoC config files. This is something I'd want to avoid. You could split up your IoC config into a normal config file and the IoC-specific stuff. But then you could just as well use the normal .NET config functionality to read the configuration.
A:
IoC.Container.Resolve<UnitOfMeasure>("default_uom");
I see this as a classic anti-pattern, where you are using the IoC container as a service locator - the key issues that result are:
*
*Your application no longer fails-fast if your container is misconfigured (you'll only know about it the first time it tries to resolve that particular service in code, which might not occur except for a specific set of logic/circumstances).
*Harder to test - not impossible of course, but you either have to create a real (and semi-configured) instance of the Windsor container for your tests or inject the singleton with a mock of IWindsorContainer - this adds a lot of friction to testing, compared to just being able to pass the mock/stub services directly into your class under test via constructors/properties.
*Harder to maintain this kind of application (configuration isn't centralized in one location)
*Violates a number of other software development principles (DRY, SOC etc.)
The concerning part of your original statement is the implication that most of your classes will have a dependency on your IoC singleton - if they're getting all the services injected in via constructors/dependencies then having some tight coupling to IoC should be the exception to the rule. In general the only time I take a dependency on the container is when I'm doing something tricky, i.e. trying to avoid circular dependency problems, or wishing to create components at run-time for some reason, and even then I can often avoid taking a dependency on anything more than a generic IServiceProvider interface, allowing me to swap in a home-baked IoC or service locator implementation if I need to reuse the components in an environment outside of the original project.
A:
Yes, many of my classes end up with a dependency on IoC.Container but that is a dependency that most of my classes will have anyways.
I think this is the crux of the issue. If in fact most of your classes are coupled to the IoC container itself chances are you need to rethink your design.
Generally speaking your app should only refer to the container class directly once during the bootstrapping. After you have that first hook into the container the rest of the object graph should be entirely managed by the container and all of those objects should be oblivious to the fact that they were created by an IoC container.
A: To comment on your specific example:
public class MyClass {
public MyClass(IDataSource ds) {
UnitOfMeasure duom = IoC.Container.Resolve<UnitOfMeasure>("default_uom");
}
}
This makes it harder to re-use your class. More specifically it makes it harder to instantiate your class outside of the narrow usage pattern you are confining it to. One of the most common places this will manifest itself is when trying to test your class. It's much easier to test that class if the UnitOfMeasure can be passed to the constructor directly.
Also, your choice of name for the UOM instance ("default_uom") implies that the value could be overridden, depending on the usage of the class. In that case, you would not want to "hard-code" the value in the constructor like that.
Using the constructor injection pattern does not make your class dependent on the IoC; just the opposite: it gives clients the option of using the IoC or not.
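A sketch of the constructor-injected version (the field names are illustrative):
public class MyClass {
    private readonly IDataSource _ds;
    private readonly UnitOfMeasure _uom;

    public MyClass(IDataSource ds, UnitOfMeasure defaultUom) {
        _ds = ds;
        _uom = defaultUom;
    }
}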
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: What is the format of the Remap XML file for IKVM? In this article Jeroen explains an example of using an XML file to remap Java Bean getters and setters to .NET Properties.
What would the XML file look like if I wanted to, say, remap a Java method called showDialog() to ShowDialog() in .NET? Has anyone worked with the remapping option before? Any idea where to get information on how it works other than inspecting the remapper.cs source code?
Edit #1 - Found something that definitely helps a bit: the map.xml file in the OpenJDK folder seems to have the same format.
Edit #2 Ouch. 7 views in 16 hours. :-) I have officially reached the fringes of SO knowledge... ;)
A: It seems you will have to use the MapFileGenerator.java mentioned in the referenced article ;-)
More info can be found on BeanInfo here.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: MVC Preview 5 - Rendering A View To String For Testing I was reading a post by Brad Wilson (http://bradwilson.typepad.com/blog/2008/08/partial-renderi.html) on the new ViewEngine changes to MVC Preview 5 and thought that it would be great to be able to render a view to string for use in tests. I get the impression from the article that it may be possible to achieve this but cannot figure out how.
I believe this would enable us to do away with some of our WatiN tests (which are slow and unreliable), as it would allow us to check that the View has rendered correctly by simply checking the string for expected values/text.
Has anyone implemented something like this?
A: It's tricky. What you have to do is set the Response.Filter property to a custom stream class that you implement. The MVC Contrib project actually has examples of doing this. I'd poke around in there.
A: I think this is what you need. The RenderPartialToString function will render the control to a string. I got it from here.
public static string RenderPartialToString(string controlName, object viewData)
{
ViewDataDictionary vd = new ViewDataDictionary(viewData);
ViewPage vp = new ViewPage { ViewData = vd };
Control control = vp.LoadControl(controlName);
vp.Controls.Add(control);
StringBuilder sb = new StringBuilder();
using (StringWriter sw = new StringWriter(sb))
{
using (HtmlTextWriter tw = new HtmlTextWriter(sw))
{
vp.RenderControl(tw);
}
}
return sb.ToString();
}
A: Beyond testing, it can be useful for components such as HTML-to-PDF converters.
These components usually support two modes of transformation:
*
*Passing a URL to the conversion method
*Passing HTML content (where you can optionally specify the baseUrl to resolve virtual paths)
I am using an Authorize filter inside the controller, so if I redirect to the URL, the rendered HTML is that of the login page (I use a custom authentication).
If I use Server.Execute(Url) to keep the context, the method fails (HttpUnhandledException: Error executing child request for /Template/Pdf/1.).
So I tried to retrieve the HTML of the rendered ViewResult, but I didn't succeed.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Retrieve Web Browser Stored Form Data? I have my web browsers set to save what I type into text boxes on forms. I have a lot of search terms stored in the text box of my browser and would like to get at it via a program of some sort before I clear these values out. There are far too many for me to go through one at a time.
The web browser must store this data somewhere - does anyone know where? Is it possible to retrieve these values? Firefox matters more to me than IE, but either would do, if anyone knows a script that can extract these values. Thanks.
A: Firefox 3
In Firefox on Windows it's stored in a SQLite file, in:
C:\Documents and Settings\<Username>\Application Data
\Mozilla\Firefox\Profiles\<UID>.default\formhistory.sqlite
Once you have the SQLite file, you can put together a script to read the data from it pretty quickly - here's a good primer to using SQLite with PHP 5 for example.
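A rough sketch with PHP's PDO SQLite driver (this assumes you have copied formhistory.sqlite out of the profile directory; the moz_formhistory table name is from Firefox 3):
<?php
$db = new PDO('sqlite:formhistory.sqlite');
foreach ($db->query('SELECT fieldname, value FROM moz_formhistory') as $row) {
    echo $row['fieldname'], ' = ', $row['value'], "\n";
}
?>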
Firefox pre-version 3
Apparently SQLite has only been used for the saved form history since version 3. Version 2 still uses formhistory.dat, which is written using Mork.
From the wiki on Mork:
Also, despite being plain text, Mork is generally regarded as unintelligible to humans and as a hard format to write parsers for.
There has been an item filed on Bugzilla asking for a more sane and readable format to be introduced; the filer even attempted to write a Perl parser for his .dat files, with limited success.
A: It seems that you can find the form history in the form of a SQLite database under USER_DIR/Mozilla/Firefox/Profiles/<profile>/formhistory.sqlite
I didn't try to browse it with SQLite, but the filename seems self-explanatory.
You can find several wrappers on the sqlite website to access it from the language of your choice.
Good Luck
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I prevent TFS from overwriting a label? If i make a label in TFS, and later make a new label with the same name... then the old label is replaced by the new one.
How do I prevent TFS from overwriting a label?
A: The following MSDN article covers using the Scope of a label to try and minimize the occurrences of these mishaps:
Using the /Child Option to Avoid Labeling Mishaps
If you issue a label command together with a pre-existing label name and an itemspec that includes files that are already marked by the same label, the value of the /child option determines whether the marked files are updated with new revision information. That is, the files are labeled by the same name, but have different scope.
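For example, something along these lines (a sketch; check the exact syntax for your TFS version) re-applies an existing label without replacing items already labelled elsewhere in the tree:
tf label MyLabel /child:merge $/MyProject/src /recursive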
A: Thanks, that put me on the right track.
It seems that the label overwrite is a "feature" and not a bug; it's working as designed.
sixletter's link above explains it, and below are two more with info about it.
http://msdn.microsoft.com/en-us/library/ms181439(VS.80).aspx
http://msdn.microsoft.com/en-us/library/ms181440(VS.80).aspx
Apparently TFS labels are not a snapshot of a point in time like in other VCSs, though I do not fully understand the explanation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: CVS: Replace HEAD with a branch How do I replace the HEAD of a CVS repository with a branch?
A: Check out this page, which has a pretty easy-to-follow walkthrough of branching and merging in CVS
http://kb.wisc.edu/middleware/page.php?id=4087
It also includes an example of replacing HEAD with a specified branch
Replacing One Branch With Another
Tag the end of your branch
cvs tag merge_NEW_BRANCH
Switch back to the branch you're replacing
To head:
cvs up -A
To branch:
cvs up -r OLD_BRANCH
Do the replace:
Replace head
cvs up -jHEAD -j NEW_BRANCH
Replace branch
cvs up -jOLD_BRANCH -j NEW_BRANCH
Commit changes and tag if you need to.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: How do you implement position-sensitive zooming inside a JScrollPane? I am trying to implement position-sensitive zooming inside a JScrollPane. The JScrollPane contains a component with a customized paint that will draw itself inside whatever space it is allocated - so zooming is as easy as using a MouseWheelListener that resizes the inner component as required.
But I also want zooming into (or out of) a point to keep that point as central as possible within the resulting zoomed-in (or -out) view (this is what I refer to as 'position-sensitive' zooming), similar to how zooming works in Google Maps. I am sure this has been done many times before - does anybody know the "right" way to do it under Java Swing? Would it be better to play with Graphics2D's transformations instead of using JScrollPanes?
Sample code follows:
package test;
import java.awt.*;
import java.awt.event.*;
import java.awt.geom.*;
import javax.swing.*;
public class FPanel extends javax.swing.JPanel {
private Dimension preferredSize = new Dimension(400, 400);
private Rectangle2D[] rects = new Rectangle2D[50];
public static void main(String[] args) {
JFrame jf = new JFrame("test");
jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
jf.setSize(400, 400);
jf.add(new JScrollPane(new FPanel()));
jf.setVisible(true);
}
public FPanel() {
// generate rectangles with pseudo-random coords
for (int i=0; i<rects.length; i++) {
rects[i] = new Rectangle2D.Double(
Math.random()*.8, Math.random()*.8,
Math.random()*.2, Math.random()*.2);
}
// mouse listener to detect scrollwheel events
addMouseWheelListener(new MouseWheelListener() {
public void mouseWheelMoved(MouseWheelEvent e) {
updatePreferredSize(e.getWheelRotation(), e.getPoint());
}
});
}
private void updatePreferredSize(int n, Point p) {
double d = (double) n * 1.08;
d = (n > 0) ? 1 / d : -d;
int w = (int) (getWidth() * d);
int h = (int) (getHeight() * d);
preferredSize.setSize(w, h);
getParent().doLayout();
// Question: how do I keep 'p' centered in the resulting view?
}
public Dimension getPreferredSize() {
return preferredSize;
}
private Rectangle2D r = new Rectangle2D.Float();
public void paint(Graphics g) {
super.paint(g);
g.setColor(Color.red);
int w = getWidth();
int h = getHeight();
for (Rectangle2D rect : rects) {
r.setRect(rect.getX() * w, rect.getY() * h,
rect.getWidth() * w, rect.getHeight() * h);
((Graphics2D)g).draw(r);
}
}
}
A: Tested this, seems to work...
private void updatePreferredSize(int n, Point p) {
double d = (double) n * 1.08;
d = (n > 0) ? 1 / d : -d;
int w = (int) (getWidth() * d);
int h = (int) (getHeight() * d);
preferredSize.setSize(w, h);
int offX = (int)(p.x * d) - p.x;
int offY = (int)(p.y * d) - p.y;
setLocation(getLocation().x-offX,getLocation().y-offY);
getParent().doLayout();
}
Update
Here is an explanation: the point p is the location of the mouse relative to the FPanel. Since you are scaling the size of the panel, the location of p (relative to the size of the panel) will scale by the same factor. By subtracting the current location from the scaled location, you get how much the point 'shifts' when the panel is resized. Then it is simply a matter of shifting the panel location in the scroll pane by the same amount in the opposite direction to put p back under the mouse cursor.
A: Here's a minor refactoring of @Kevin K's solution:
private void updatePreferredSize(int wheelRotation, Point stablePoint) {
double scaleFactor = findScaleFactor(wheelRotation);
scaleBy(scaleFactor);
Point offset = findOffset(stablePoint, scaleFactor);
offsetBy(offset);
getParent().doLayout();
}
private double findScaleFactor(int wheelRotation) {
double d = wheelRotation * 1.08;
return (d > 0) ? 1 / d : -d;
}
private void scaleBy(double scaleFactor) {
int w = (int) (getWidth() * scaleFactor);
int h = (int) (getHeight() * scaleFactor);
preferredSize.setSize(w, h);
}
private Point findOffset(Point stablePoint, double scaleFactor) {
int x = (int) (stablePoint.x * scaleFactor) - stablePoint.x;
int y = (int) (stablePoint.y * scaleFactor) - stablePoint.y;
return new Point(x, y);
}
private void offsetBy(Point offset) {
Point location = getLocation();
setLocation(location.x - offset.x, location.y - offset.y);
}
A: Your MouseWheelListener also has to locate the cursor, move it to the center of the JScrollPane and adjust the xmin/ymin and xmax/ymax of the content to be viewed.
A: I think something like this should work...
private void updatePreferredSize(int n, Point p) {
double d = (double) n * 1.08;
d = (n > 0) ? 1 / d : -d;
int w = (int) (getWidth() * d);
int h = (int) (getHeight() * d);
preferredSize.setSize(w, h);
// Question: how do I keep 'p' centered in the resulting view?
int parentWdt = this.getParent( ).getWidth( ) ;
int parentHgt = this.getParent( ).getHeight( ) ;
int newLeft = p.getLocation( ).x - ( p.x - ( parentWdt / 2 ) ) ;
int newTop = p.getLocation( ).y - ( p.y - ( parentHgt / 2 ) ) ;
this.setLocation( newLeft, newTop ) ;
getParent().doLayout();
}
EDIT:
Changed a couple things.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Use VBA in Office 2007 Applications? Is VBA going to go away any time soon, like VB6 has? Should I not develop new Office applications with VBA? Or should I be developing all new Office Apps with VSTO?
Update: Recently read this article.
A: Office VSTO offers a great deal of additional functionality over Office VBA, and while I don't believe Microsoft has signaled that it's going to terminate VBA (in fact, they've said explicitly that it will be around at least until Office 14; Office 2007 = Office 12), I think it's well worth the effort to move your applications to VSTO to take advantage of the additional flexibility and power.
I actually don't think that deprecating VBA would be feasible, since a fair amount of Office programming takes place at the macro level by business users and I don't think that's going to go away any time soon. Those folks don't generally have access to a VSTO-capable IDE.
A: VSTO has new features, but also has a number of major deficiencies compared with VBA.
For one thing, Code Access Security can make it difficult to deploy VSTO applications (that's being polite).
For another, the VSTO development environment is nowhere near as accessible to "Power User" developers as VBA. For example, no macro recorder to get you started.
And a big showstopper is that .NET interop with out-of-process COM objects doesn't work well. For example, if you want to manipulate other Office applications (Word, PowerPoint, Outlook) from within an Excel VSTO application, you will find multiple copies of these applications running in the background, for the reasons described in this KB article.
All this coupled with the huge investment in existing VBA apps means VBA won't be going away any time soon.
A: Microsoft has stated that VBA will be supported moving forward for the forseeable future, but they are also recommending that new apps use VSTO.
The latest Mac version of MS Office don't support VBA, and 64-bit Windows runs it in a virtual 32-bit out-of-process mode. So if you are planning a new application using Office as a platform, VSTO is definitely the way to go, but you shouldn't worry too much about legacy support.
As @cori points out, it would be a big marketing no-no for MS to just yank support and break so much existing software.
A: Microsoft have been dropping hints at a managed-code version of Office with an integrated VSTO (presumably in the same way as the VB6 IDE is integrated for VBA, so the VS IDE would be integrated for VSTO) ever since .NET was first released.
Given just how much coding is involved - and given that this would not produce any features that would be visible to users - I very much doubt that this is high on the Microsoft priority list. I can imagine that they layer a managed code set of objects over the top of the existing codebase (much as Joel Spolsky layered a set of COM objects over the existing C codebase when putting VBA into Excel in the first place) and bung a new IDE in as the default, while hiding the old one. Even that would be a major exercise (imagine writing the macro recorder!). Of course, this would make .NET a pre-req for Office, which the Office team will only accept at gunpoint.
They will never actually remove VBA from the products, of course - Excel still supports Excel 4 macros, and Word still has the WordBasic Automation object to support Word 6 macros, and there's no sign of either of those being removed, since there is too much legacy code to support - and no-one has used either of those coding models in a decade.
If Microsoft do ever put a .NET environment into Office (which, frankly, I doubt will ever happen), then they might stop adding VBA support for new Office features. That's the closest they'll get to discontinuing VBA.
A: Here is a comment from Microsoft regarding future VBA support. In a nutshell, it is not going away on Windows versions of Office (but is discontinued for Mac versions).
A: VBA is a long way from being deprecated; in fact, VBA is to be reintroduced into the next version of Office on the Mac ( http://www.microsoft.com/presspass/press/2008/may08/05-13MacBU2008PR.mspx).
For most people on the ground, VBA and C XLLs (and VB6!!) continue to be the tools of choice. The current .NET linkages are slow and offer zero productivity gain. 3rd party tools such as ExcelDNA ease the pain somewhat, but obviously the unmanaged C based (and assembler based) code base of Office doesn't sit easily with .NET.
A: VBA add-ins are a bit troublesome to deploy, but VSTO is even more so. Also, VSTO involves a bit of overhead, as it needs to start up the CLR before running your code.
But most important of all: VSTO takes away the pain of writing VBA.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Test Automation with Embedded Hardware Has anyone had success automating testing directly on embedded hardware?
Specifically, I am thinking of automating a battery of unit tests for hardware layer modules. We need to have greater confidence in our hardware layer code. A lot of our projects use interrupt-driven timers, ADCs, serial I/O, serial SPI devices (flash memory), etc.
Is this even worth the effort?
We typically target:
Processor: 8 or 16 bit microcontrollers (some DSP stuff)
Language: C (sometimes C++).
A: Yes.
The difficulty depends on the type of hardware that you're trying to test. As others have said earlier the issue is going to be the complexity of the external stimulus that you need to apply. External stimulus is probably best achieved with some external test rig (as Adam Davis has described).
One thing to consider, though, is exactly what it is that you're trying to verify.
It's tempting to assume that to verify the interaction of the hardware and the firmware you've really no option but to directly apply external stimulus (ie. applying DACs to all of your ADC inputs, etc.). In these cases, though, the corner cases that you really want to test are often going to be subject to issues of timing (eg. interrupts arriving when you're executing function foo()) which are going to be incredibly difficult to test in a meaningful way - and even harder to get meaningful results from. (ie. The first 100K times we ran this test it was fine. The last time we ran it it failed. Why?!?)
But the verification of the hardware should be done separately. Once this is done, unless it's changing regularly (through downloadable fpga images or the like), you should be able to assume that the hardware works and purely test your firmware.
So in this case you can concentrate on verifying the algorithms that are used for processing your external stimuli. For example, calling your ADC conversion routines with a fixed value as if it came from your ADC directly. These tests are repeatable and therefore of benefit. They will require special test builds though.
Testing the communications paths of your device is going to be relatively straightforward and shouldn't require special code builds.
A: We have had good results with automated testing on our embedded systems. We have tests written in high-level (easy to program and debug) languages that run on dedicated test machines. These tests generally do sanity checking or generate random inputs into the devices, then check for correct behavior. There is a lot of work to generate and maintain these tests. We designed a framework and then let interns work on the tests themselves.
It's not a perfect solution, and the tests are certainly prone to errors, but the most important part is to improve on your existing coverage holes. Find the biggest hole and design something to cover it in an automated fashion, even if it isn't perfect or won't cover the entire feature. Later when all of your stuff is covered somewhat, you can come back and address the worst coverage or the most critical features.
Some things to consider:
*
*What is the penalty of a firmware bug? How easy is it to update firmware in the field?
*What kind of coverage do my tests provide? Is it a simple sanity check? Is it configurable enough that it can test many different scenarios?
*Once a test has failed, how will you reproduce that failure in order to debug it? Did you log all the device and test settings so you can eliminate as many variables as possible? Device configuration, firmware version, test software version, all external inputs, all observed behavior?
*What are you testing against? Is the spec clear enough on what the expected behavior of the device is, or are you validating against what you think the code should do?
A: If your goal is to test your low-level driver code you will likely need to create some sort of test fixture, using loopback cables or multiple interconnected units to allow you to exercise each driver. Pairing a board with known-good software with a board running a development build will allow you to test for regressions in communication protocols, etc.
Specific test strategies depend on the hardware you wish to test. For example, ADCs can be tested by presenting a known waveform and converting a series of samples, then checking for the proper range, frequency, average value, etc.
I have found this type of testing to be very valuable in the past, allowing me to confidently modify and improve driver code without fear of breaking existing applications.
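As a rough sketch of what such a host-side ADC check might look like (Python here; the sample rate, expected waveform and tolerances are all made-up values, and capturing the samples from the target is left out):
SAMPLE_RATE_HZ = 48000      # assumed capture rate
EXPECTED_FREQ_HZ = 1000     # known test waveform: a 1 kHz sine
EXPECTED_MEAN = 2048        # mid-scale of a hypothetical 12-bit ADC
TOLERANCE = 0.05            # 5% tolerance on each metric
def check_adc_capture(samples):
    # Verify the average value (DC level) of the captured samples.
    mean = sum(samples) / len(samples)
    assert abs(mean - EXPECTED_MEAN) <= EXPECTED_MEAN * TOLERANCE, "DC level out of range"
    # Estimate the frequency by counting mean crossings (two per cycle).
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < mean) != (b < mean))
    duration_s = len(samples) / SAMPLE_RATE_HZ
    frequency = crossings / (2 * duration_s)
    assert abs(frequency - EXPECTED_FREQ_HZ) <= EXPECTED_FREQ_HZ * TOLERANCE, "frequency out of range"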
A: Sure. In the automotive industry we use $100,000 custom built testers for each new product to verify the hardware and software are operating correctly.
The developers, however, also build a cheaper (sub $1,000) tester that includes a bunch of USB I/O, A/D, PWM in/out, etc and either use scripting on the workstation, or purpose built HIL/SIL test software such as MxVDev.
Hardware in the Loop (HIL) testing is probably what you mean, and it simply involves some USB hardware I/O connected to the I/O of your device, with software on the computer running tests against it.
Whether it's worth it depends.
In the high reliability industry (airplane, automotive, etc) the customer specifies very extensive hardware testing, so you have to have it just to get the bid.
In the consumer industry, with non-complex projects it's usually not worth it.
With any project where there's more than a few programmers involved, though, it's really nice to have a nightly regression test run on the hardware - it's hard to correctly simulate the hardware to the degree needed to satisfy yourself that the software testing is enough.
The testing then shows immediately when a problem has entered the build.
Generally you perform both black box and white box testing - you have diagnostic code running on the device that allows you to spy on signals and memory in the hardware (which might just be a debugger, or might be code you wrote that reacts to messages on a bus, for instance). This would be white box testing where you can see what's happening internally (and even cause some things to happen, such as critical memory errors which can't be tested without introducing the error yourself).
We also run a bunch of 'black box' tests where the diagnostic path is ignored and only the I/O is stimulated/read.
For a much cheaper setup, you can get $100 microcontroller boards with USB and/or ethernet (such as the Atmel UC3 family) which you can connect to your device and run basic testing.
It's especially useful for product maintenance - when the project is done, store a few working boards, the tester, and a complete set of software on CD. When you need to make a modification or debug a problem, it's easy to set it all back up and work on it with some knowledge (after testing) that the major functionality was not affected by your changes.
-Adam
A: Yes, I do this, although I've always had a serial port available for test I/O.
It is frequently difficult to leave the unit totally unmodified. Some tests require a line commented out or a call added e.g. to deal with a watchdog.
IMHO, this is better than no unit testing at all. And of course you need to be doing complete integration/system testing, too.
A: Yes. I have had success, but it is not a straightforward problem to solve. In a nutshell, here is what my team did:
*
*Defined a variety of unit tests using a home-built C unit-testing framework. Basically, just a lot of macros, most of which were named TEST_EQUAL, TEST_BITSET, TEST_BITVLR, etc.
*Wrote a boot code generator that took these compiled tests and orchestrated them into an execution environment. It's just a small driver that executes our normal startup routine - but instead of going into the control loop, it executes a test suite. When done, it stores the last suite to run in flash memory, then it resets the CPU. It will then run the next suite. This is to provide isolation in case a suite dies. (However, you may want to disable this to make sure your modules cooperate. But that's an integration test, not a unit test.)
*Individual tests would log their output using the serial port. This was OK for our design because the serial port was free. You will have to find a way to store your results if all your I/O is consumed.
It worked! And it was great to have. Using our custom datalogger, you could hit the "Test" button, and a couple minutes later, you would have all the results. I highly recommend it.
Updated to clarify how the test driver works.
A: Unit testing embedded projects is quite difficult, as it usually requires an external stimulus and external measurement.
We have been successful in developing an external serial protocol (either RS232, UDP or TCP/IP messages) with basic commands for exercising the hardware, with debug logging in the low-level drivers looking for erroneous conditions or even slightly abnormal conditions (especially for limit checking).
But once developed, we can then run the testing after every build if required. It will definitely allow you to deliver a better quality product.
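As an illustration, a minimal host-side driver for such a serial protocol might look like this (assuming the pyserial library; the command set and reply formats are invented, so substitute your own debug protocol):
import serial  # pyserial, assumed installed
def run_smoke_test(port="/dev/ttyUSB0"):
    link = serial.Serial(port, baudrate=115200, timeout=2)
    try:
        checks = [
            (b"ver?\n", lambda r: r.startswith(b"1.")),   # firmware version query
            (b"adc0?\n", lambda r: 0 <= int(r) <= 4095),  # 12-bit ADC reading in range
        ]
        for command, check in checks:
            link.write(command)
            reply = link.readline().strip()
            assert check(reply), "%r returned %r" % (command, reply)
    finally:
        link.close()
Run after every build, even a small script like this catches driver regressions early.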
A: If your goal is manufacturing test (ensuring that the modules are properly assembled, no inadvertent shorts/opens/etc), you should focus first on testing cables and connectors, followed by socketed and soldered connections, then the PCB itself. These items can all be tested for shorts & opens by finding access patterns that drive each individual line high while its neighbors are low and vice-versa, then reading back the lines' values.
Without knowing more details of your hardware it's difficult to be more specific, but most embedded processors can set I/O pins to a GPIO mode that simplifies this sort of testing.
If you are not performing bed-of-nails testing on your PCAs, this testing should be considered a mandatory first step for newly manufactured boards.
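A sketch of that walking-ones pattern (gpio_write and gpio_read are hypothetical helpers standing in for whatever diagnostic access your board provides):
def walking_ones_test(lines, gpio_write, gpio_read):
    # Drive each line high while all others are low, then read everything back.
    failures = []
    for i, line in enumerate(lines):
        pattern = [j == i for j in range(len(lines))]  # only line i is high
        gpio_write(lines, pattern)
        readback = gpio_read(lines)
        if readback != pattern:
            failures.append((line, pattern, readback))  # likely a short or open
    return failures  # an empty list means no shorts/opens were detected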
A: I know this is old now, but maybe it will help. Yes, you can do it but it depends on how much you want to invest in the solution you want. More than two years I have worked on test and validation for the MCAL layer of AUTOSAR. This is kind of the lowest you can get when it comes to software testing. It was a sort of component level testing. Some may call it unit level but it was slightly higher than that because we were testing the APIs of the MCAL components. Things like: ADC, SPI, ICU, DIO and so on.
The solution used involved:
- a test framework that was running on the target micro
- a dSPACE box to provide and read signals to and from the target when required
- XCP access through Vector CANape to trigger the test execution and results collection
- a python framework to perform the test control and validation of the results
The test cases were written in C and they were flashed on the target along with the software under test. It was a black box test because we didn't alter the implementation of the MCAL in any way. And I think not even the startup sequence was touched. An idle task was used to continuously check the state of a flag that was the signal to start executing a test. A 10 ms task was used to actually run the test. A test case was in fact a switch case. Every case in this switch was a test step. Python was triggering the test execution at the test step level. A good thing with this approach was the reuse of steps with different parameters.
This test control - what to execute and how - was done by Python through a test control data structure acting as an API between the test implementation and the test triggering and evaluation mechanism. This is what CANape was used for: to set the test to be executed and to read the results of the test. Every value obtained by a test step was stored in an array that was part of the data structure. The test step itself wasn't involved in any validation because the target was considered an untrusted component of the test environment.
The validation was done by Python based on the test specifications. Python was parsing these specifications and was able to automatically create test triggering scripts including the validation criteria for every test step. The specification of every test case was a series of test step descriptions together with their validation criteria. Some of these steps were dSPACE-related steps. As an example, one step was initializing something and calling for capturing some edges on an already configured channel, and the next step was applying the signal on that channel by commanding the dSPACE equipment.
A cheaper solution would involve using an in-house board instead of the dSPACE equipment. To some extent, even a programmable signal generator can be used, but that would not help if you need to validate signals output by the target.
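A toy version of that test-control idea (the spec format is invented, and xcp_write/xcp_read merely stand in for the real CANape/XCP access layer):
TEST_SPEC = [
    # (step id, expected low, expected high) - invented validation criteria
    (1, 0, 10),
    (2, 100, 200),
]
def run_test_case(xcp_write, xcp_read):
    results = []
    for step, low, high in TEST_SPEC:
        xcp_write("test_control.step", step)                # trigger the step on the target
        value = xcp_read("test_control.result[%d]" % step)  # read back the result array
        results.append((step, value, low <= value <= high)) # validate on the host side
    return results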
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
}
|
Q: Should unit test classes be kept under version control with the rest of the code? If I create a test suite for a development project, should those classes be kept under version control with the rest of the project code?
A: Yes, all the same reasons you put production code in to source control still apply to any unit tests you write.
It's the classic who, when and why questions:
*
*Who changed the code?
*When did they change it?
*What did they change it for?
These questions are just as pertinent to testing code as they are to production code. You absolutely should put your unit testing code in to the repository.
A: Absolutely. Test classes must stay up-to-date with the code. This means checking it in and running the tests under continuous integration.
A: Yes, there is no reason not to put them in source control. What if the tests change? What if the interfaces change, necessitating that the tests change?
A: Absolutely! Test classes are source code and should be managed like any other source code. You will need to modify them and keep track of versions and you want to know the maintenance history.
You should also keep test data under source control unless it is massively large.
A: Unit tests should be tied to a code base in your repository.
For no other reason than if you have to produce a maintenance release for a previous version, you can guarantee that, by the metric of your unit tests, your code is no worse than it was before (and hopefully is now better).
A: Indeed yes. How could anyone ever think otherwise?
If you use code branches, you should try and make your testing code naturally fit under the main codeline so when you branch, the right versions of the tests branch too.
A: Yes they should. People checking out the latest release should be able to unit test the code on their machine. This will help to identify missing dependencies and can also provide them with unofficial documentation on how the code works.
A: Yes.
Test code is a code. It should be maintained, refactored, and versioned. It is a part of your system source.
A: Absolutely, they should be treated as first-class citizens of your code base. They'll need all the love and care, i.e. maintenance, that any piece of code does.
A: Yes they should. You should be checking the tests out and running them whenever you make code changes. If you put them somewhere else that is that much more trouble to go through to run them.
A: Yes. For all of the other reasons mentioned here, plus also the fact that as functionality changes, your test suite will change, and it should be easy to get the right test suite for any given release, branch, etc. and having the tests not only in version control but the same repository as your code is the way to achieve that.
A: Absolutely. You'll likely find that as your code changes your tests may need to change as well, so you'll likely want to have a record of those changes, especially if the tests or code all of a sudden stop working. ;-)
Also, the unit testcases should be kept as close as possible to the actual code they are testing (the bottom of the same file seems to be the standard). It's as much for convenience as it is for maintenance.
For some additional reading about what makes a good unit test, check out this stackoverflow post.
A: Yes, for all the reasons above. Also, if you are using a continuous integration server that is "watching" your source control, you can have it run the latest unit tests on every commit.
This means that a broken build results from unit tests failing as well as from code not compiling.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: no respond_to block in edit action (generated with scaffold)? Does anyone know why there is no respond_to block for generated edit actions? Every other action in typical scaffold controllers has a respond_to block in order to output html and xml formats. Why is the edit action an exception?
I'm using the latest version of Ruby on Rails (2.1.1).
A: Somewhat related. Some may wonder why the rails scaffolding for the new action still has a respond_to block, whereas the edit action does not. This is because a request to something like:
GET /my_models/new.xml
...gives back an XML template that can be used to create a new model.
A: Rails handles the 99% case: It's fairly unlikely you'd ever need to do any XML or JSON translations in your Edit action, because non-visually, the Edit action is pretty much just like the Show action. Nonvisual clients that want to update a model in your application can call the controller this way
GET /my_models/[:id].xml (Show)
Then, the client app can make any transformations or edits and post (or put) the results to
PUT /my_models/[:id].xml (Update)
When you call this, you usually are doing it to get an editable form of the Show action:
GET /my_models/[:id]/edit
And it is intended for human use. 99% of the time, that is. Since it's unusual to transform the data in the Edit action, Rails assumes you aren't going to, and DRYs up your code by leaving respond_to out of the scaffold.
A: Because the edit action will only be called from HTML.
There is no need for the edit form to be returned in an XML context.
Using REST, you simply make a PUT call directly to update with the relevant information.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: What are your experiences with Windows Workflow Foundation? I am evaluating WF for use in line of business applications on the web, and I would love to hear some recent first-hand accounts of this technology.
My main interest here is in improving the maintainability of projects and maybe in increasing developer productivity when working on complex processes that change frequently.
I really like the idea of WF, however it seems to be relatively unknown and many older comments I've come across mention that it's overwhelmingly complex once you get into it.
If it's overdesigned to the point that it's unusable (or a bad tradeoff) for a small to medium-sized project, that's something that I need to know.
Of course, it has been out since late 2006, so perhaps it has matured. If that's the case, that's another piece of information that would be very helpful!
Thanks in advance!
A: We had a project at work that I was involved in using Workflows.
The idea (from management), was that us programmers would write the Workflow Activities along with the "engine" and framework. Then non-programmers would take care of all the rest by compiling their own Workflows into dlls which the engine would automatically load.
Management was sold on this idea of non-programmers using Workflow to help develop software, and it was pretty much a complete waste of time. The problem we were trying to solve with this project was relatively complex and we knew from the very beginning that the software would have to be modified almost constantly (its calculations were dependent on other companies and governments).
The end result was that we were unable to make the Workflow modules generic enough for anyone else to use. So the programmers were the ones who were forced to work with the Workflows, and all the Workflows did was get in our way.
A: I've been using Workflow 4.0 for the last few months and although mostly impressed, I've found it extremely hard to learn.
For the most recent version (that comes with .NET 4.0 RC), there is next to no documentation on the web, in any books, or in any training courses. I've only found articles relating to the now defunct 3.0 version. Even the MSDN documentation is light on the ground.
The workflow designer is not as intuitive as it should be by any means so learning is very hard. I've had to rely on answers from a single person on StackOverflow (thanks by the way Maurice!) - and I would be stuffed without his help.
So in summary, I think it has potential but you would be quite mad to learn it yet - wait for more training, documentation and books otherwise you will be going into it blind!
A: Last year we completed a working application with WF, now used as the backbone of an unbelievably huge system which is used by a very big bank for its mortgage process. The process has many steps, starting from customer application to approval of credit.
Although it was a success, there were so many problems and crises all along the way. And it wouldn't be worth the trouble for any smaller-sized project.
A: I consider MS WF as a low-level workflow library rather than a fully fledged enterprise workflow product such as K2. It will enable you to build a workflow enabled application, but is not in itself a workflow application. My experience of it in this capacity has been positive, although we have had to build a lot of our own infrastructure around it (a pub/sub framework, a workflow lifetime manager, etc.). A lot of the documentation out there is fairly simplistic and does not cover building up an enterprise workflow application based on MS WF.
A: Hard to learn. Quite flexible. Not to be confused with a visual tool for end users, only for programmers. Not sure if I like the dependency property approach.
A: Windows Workflow Foundation is a very capable product but still very much in its 1st version :-(
The main reasons for use include:
*
*Visually modeling business requirements.
*Separating your business logic from the business rules and externalizing rules as XML files.
*Separating your business flow from your application by externalizing your workflows as XML files.
*Creating long running processes with the automatic ability to react if nothing has happened for some extended period of time. For example an invoice not being paid.
*Automatic persistence of long running workflows to keep resource usage down and allow a process and/or machine to restart.
*Automatic tracking of workflows helping with business requirements.
WF comes as a library/framework so most of the time you need to write the host that instantiates the WF runtime. That said, using WCF hosted in IIS is a viable solution and saves a lot of work. However the WCF/WF coupling is less than perfect and needs some serious work. See here http://msmvps.com/blogs/theproblemsolver/archive/2008/08/06/using-a-transactionscopeactivity-with-a-wcf-receiveactivity.aspx for more details. Expect quite a few changes/enhancements in the next version.
WF (and WCF) are pretty central to a lot of the new stuff coming out of Microsoft. You can expect some interesting announcements during the PDC.
BTW keeping multiple versions of a workflow running takes a bit of work but that is mostly standard .NET. I just did a series of blog posts on the subject starting here: http://msmvps.com/blogs/theproblemsolver/archive/2008/09/10/versioning-long-running-workfows.aspx
About visually modeling business requirements.
In theory, this works quite well with a separation of intent and implementation. However, in practice, you will drop quite a few extra activities on a workflow purely for technical reasons, and that sort of defeats the purpose, as you have to tell a business analyst to ignore half the shapes and lines.
A: It really depends on what you want to do with it. I've only used it a little, but compared to more mature products like MetaStorm (I know technically it's a BPM, but there is still a workflow component), Process Choreographer and IBM MQ workflow, there's no comparison. It's just not mature enough. On the other hand it's free where the others are not and can probably get the job done. I don't know if I would place a multi-million dollar operation on it, but with smaller ones, I'd give it another shot. The real hurdle you are going to face is the change in thought process it requires. If you don't have developers that have worked with state systems before, that can be a real hurdle.
A: Related question: When to use Windows Workflow Foundation? My answer there:
You may need WF only if any of the
following is true:
*
*You have a long-running process.
*You have a process that changes frequently.
*You want a visual model of the process.
For more details, see Paul Andrew's
post: What to use Windows Workflow Foundation for?
Please do not confuse or relate WF
with visual programming of any kind.
It is wrong and can lead to very bad
architecture/design decisions.
So, if you have such requirements, then WF is a good candidate. Of course it is relatively complex, but note that the problems it is trying to solve are also complex (and sometimes very complex). IMHO, it is very complex, for example, to dehydrate/rehydrate objects that have event handlers attached (with events that can be triggered when the object is not in memory).
I cannot judge what you mean by "small to medium-sized project", but in general I would say that if your project has at least two requirements from the above list, then you can consider WF as a solution.
A: We've used WF in a large-ish SharePoint application and I can say it's OK. It has lots of power and flexibility, and, as Kevin mentions, once you grok the underlying concepts of workflows, you can do pretty much anything you want with it.
On the other hand, it has some really serious issues, like lack of versioning, which can really hurt your application in the future. We've been forced to deploy up to 3 parallel versions of the same workflow named xxx-v1, xxx-v2 and xxx-v3 to keep older instances running and have new instances use the updated versions. A real pain in the ass. Oh, and there are also some really non-intuitive concepts in there (correlation tokens, wtf??)
A: Brian, I can't reply to your comment, but anyway, by versioning I mean making changes to the underlying code of the workflow without breaking already running instances, and gracefully applying updates to existing workflows. I'm not sure about 'stock' WF, but at least in a SharePoint environment there's no concept of workflow versions, so new versions have to be deployed as completely different workflows, which becomes a maintenance nightmare.
This has nothing to do with 'rehydration'; rehydration is the process by which you bring a 'dormant' workflow back to activity after some event or change in state. That is handled transparently by the workflow runtime.
A: WF is integrated into SharePoint (WSS 3.0), and I have created quite a few workflows for various SharePoint websites, so I can speak to my experience of WF in SharePoint. Compared with other workflow frameworks WF scores well. It's stable (I haven't experienced any mysterious errors), workflows are fairly easy to design (thanks to the workflow designer in Visual Studio) and you can use not only sequential but also state-machine workflows.
It's not perfect, of course, and a developer will definitely need some time to understand the concepts (e.g. the Activity Model); but it's definitely usable - even for "small tasks".
A: Never tried WFF, but I remember reading this article about WFF by Leon Bambrick where he basically says the whole genre of software development tools is nonsense. Might help you decide one way or the other.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
}
|
Q: Strategies for Caching on the Web? What concerns, processes, and questions do you take into account when deciding when and how to cache? Is it always a no-win situation?
This presupposes you are stuck with a code base that has been optimized.
A: I have been working with DotNetNuke most recently for web applications and there are a number of things that I consider each time I implement caching solutions.
*
*Do all users need to see cached content?
*How often does each bit of content change?
*Can I cache the entire page?
*Do I need a manual way to purge the cache?
*Can I use a single cache mechanism for the entire site, or do I need multiple solutions?
*What impacts occur if information is somehow out of date?
A: I would look at each feature of your website/application and decide for each feature:
*
*Should it be cached?
*How long should it be cached for?
*When should the cache be expunged?
I would personally go against caching whole pages in favour of caching sections of the website/application.
A: First off, if your code is optimized as you said, you will only see noticeable performance benefits when the site is being hammered with a lot of requests.
However, it is faster to pull resources from RAM than from the disk, so your web server will be able to handle more requests if you have a caching strategy in place.
As for knowing when you're going to need caching, consider that even low end modern web servers can handle hundreds of requests per second, so unless you expect a decent amount of traffic, caching is probably something you can just skip.
Also, if you are pulling content from your database (for example, StackOverflow probably does this) caching can be very helpful because database operations are relatively expensive and can be a huge bottleneck in high-volume situations.
As for a scenario when it's not appropriate to cache or when caching becomes difficult... If you try to cache a dynamic page that, say, displays the current date and time, you will constantly see an old date/time unless you get a little more involved with your caching strategy. So that's something to think about.
A: What language are you using? With ASP.NET you get some very easy caching by only adding an attribute over the method, and the value is cached depending on the time.
If you want more control over the cache, you can use a popular system like memcached and control expiry by time or by event.
A: Yahoo, for example, "versions" their JavaScript, so your browser downloads code-1.2.3.js, and when a new version appears they reference that version. By doing this they can make their JavaScript code cacheable for a very, very long time.
As for the general answer I think it depends on your data, on how often does it change. For example, images don't change very often, but html pages do. The "About us" page doesn't change too often, but the news section does.
A: You can cache by time. This is useful for data that changes fast. You can set the time to 30 sec or 1 min. Of course, this requires some traffic. The more traffic you have, the more you can play with the time, because if you have 1 visit every hour, that visit will only populate the cache without ever using it...
You can cache by event... if your data changes, you update the cache... this one is very useful if the data needs to be accurate for the user very quickly.
You can cache static content that you know won't change often. If you have a top 10 of the day that refreshes every day, then you can store it all in the cache and update it once a day.
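To make the time-based option concrete, here is a minimal TTL cache sketch (a toy illustration, not a replacement for memcached):
import time
class TtlCache:
    # Entries expire max_age_s seconds after they were stored.
    def __init__(self, max_age_s=30):
        self.max_age_s = max_age_s
        self.store = {}  # key -> (timestamp, value)
    def get(self, key, compute):
        entry = self.store.get(key)
        if entry is not None and time.time() - entry[0] < self.max_age_s:
            return entry[1]                    # still fresh: serve from the cache
        value = compute()                      # missing or expired: recompute
        self.store[key] = (time.time(), value)
        return value
The event-based variant would instead evict with something like store.pop(key, None) from whatever code path changes the underlying data.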
A: Where available, look out for whole-object memory caching. In ASP.NET, this is a built-in feature where you can just plant your business logic objects in the IIS Application and access them from there.
This means you can store everything you need to generate a page in memory (persisting writes to database) and generate a page without ANY database IO.
You still need to use the page-building logic to generate the page, but you save a lot of time in getting the data.
Other techniques involve localised output caching, where you capture the output before sending and save it to a file. This is great for static sections (like navigation on certain pages, or text bodies) which you can then include when they're requested. Most implementations purge cached objects like this when a write happens or after a certain period of time.
Then there's the least "accurate": whole page caching. It's the highest performer but it's pretty useless unless you have very simple pages.
A: What kind of caching? Server side caching? Client side caching?
Client side caching is a no-brainer with certain things, like static HTML, SWFs and images. Figure out how often the assets are likely to change, and set up "Expires" headers as appropriate. (2 days? 2 weeks? 2 months?)
Dynamic pages, by definition, are a little harder to cache. There have been some explorations in caching of certain chunks using Javascript (and degrading to IFrames if JS is not available.) This however, might be a little more difficult to retrofit into an existing site.
DB and application level caching may, or may not work, depending on your situation. That really depends on where your bottlenecks are. Figuring out where your application spends the most time on page-rendering is probably priority 1, then you can start looking at where and how to cache.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: When should I use # and = in ASP.NET controls? I have been using ASP.NET for years, but I can never remember when using # or = is appropriate.
For example:
<%= Grid.ClientID %>
or
<%# Eval("FullName")%>
Can someone explain when each should be used so I can keep it straight in my mind? Is # only used in controls that support databinding?
A: Here's a great blog post by Dan Crevier that walks through a test app he wrote to show the differences.
In essence:
*
*The <%= expressions are evaluated at render time
*The <%# expressions are evaluated at DataBind() time and are not evaluated at all if DataBind() is not called.
*<%# expressions can be used as properties in server-side controls. <%= expressions cannot.
A: There are a couple of different 'bee-stings':
*
*<%@ - page directive
*<%$ - resource access
*<%= - explicit output to page
*<%# - data binding
*<%-- - server side comment block
Also new in ASP.Net 4:
*
*<%: - writes out to the page, but with HTML encoded
Also new in ASP.Net 4.5:
*
*<%#: - HTML encoded data binding
A: <%= %> is the equivalent of doing Response.Write("") wherever you place it.
<%# %> is for Databinding and can only be used where databinding is supported (you can use these on the page-level outside a control if you call Page.DataBind() in your codebehind)
Databinding Expressions Overview
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
}
|
Q: How do I do table sorting using CodeIgniter? I've been developing a site over the past few weeks using CodeIgniter as the framework. I've been thinking of the best way to accomplish something, which in a lot of other frameworks in other languages is relatively simple: sortable tables. CodeIgniter switches off query strings by default, because your URLs contain method parameters. So a URL might look like:
/controller/method/param1/param2
You might think that you could just add in sortBy and sortOrder as two additional parameters to the controller method. I don't particularly want to do that, mainly because I want to have a re-usable controller. When you use query string parameters, PHP can easily tell you whether there is a parameter called sortBy. However, when you're using URL based parameters, it will vary with each controller.
I was wondering what my options were. As far as I can see they are something like:
*
*Pass in my sortBy and sortOrder parameters, just suck it up, and develop some less-than-reusable component for it.
*Have an additional controller, which will store the sortBy and sortOrder in the session (although it would have to know where you came from, and send you back to the original page).
*Have some kind of AJAX function, which would call the controller above; then reload the page.
*Hack CodeIgniter to turn query strings back on. Actually, if this is the only option, any links to how to do this would be appreciated.
I just can't quite believe such a simple task would present such a problem! Am I missing something? Does anyone have any recommendations?
While I love jQuery, and I'm already using it on the site, so TableSorter is a good option. However, I would like to do server-side sorting as there are some pages with potentially large numbers of results, including pagination.
A: If you're OK with sorting on the client side, the Tablesorter plugin for jQuery is pretty nice.
A: I ran into this with a fairly complex table. The hard part was that the table could grow/shrink depending on certain variables!! Big pain :(
Here's how I handled it..
Adjusted system/application/config/config.php to allow the comma character in the URI:
$config['permitted_uri_chars'] = 'a-z 0-9~%.:_\-,';
Adjust my controller with a sorting function:
function sorter() {
    //get the sort params
    $sort = explode(",", $this->uri->segment(3)); //the 3rd segment is the column/order
    //pass the params to the model
    $data = $this->model_name->get_the_data($sort[0], $sort[1]);
    $this->_show($data);
}
function _show($data) {
    //all the code for displaying your table
}
I've oversimplified, but you get the idea. The purpose is to have a url like this:
/controller/sorter/columnname,sortorder
The sorter function calls another internal function to deal with the display/template/view logic - its job is to deal with the sorting call and get the appropriate data from the model.
Of course, this could be reduced to just your current function:
function showGrid() {
    $sort = $this->uri->segment(3);
    if ($sort) {
        //get the data sorted
    } else {
        //get the data the default way
    }
    //rest of your view logic
}
That way, you don't even need a separate function - and can use the third segment to define your sorting.
A: I recently added this Table sorter (which uses Prototype) to a bunch of my pages. It's fast and pretty easy to implement.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Does it matter which vendor's JDK you build with? If I'm deploying to servers with WebSphere 6.1 (Java 1.5), should I use IBM's JDK on my build box? Or will Sun's JDK compile to the same binary?
If I should use IBM's, where can I get the Windows x64 version?
A: I would as much as possible try to keep development as close to production as possible. IBM's and Sun's JDKs certainly both satisfy the SDK certification, but they are by no means identical. Their instrumentation and memory management are at least slightly different. If nothing else, the bugs in the JDKs will be different, and your code may only trip over them in one scenario vs. another. It'll also probably only happen at 4 am, when the moon is full, and especially when you have company over.
I can't tell you where to get IBM's JDK, but if you've got a license to WebSphere at your company, you should have a contact at IBM to get you a link to that JDK.
Good luck, and always try to minimize differences where possible.
A: It should not make any difference. It will probably not be exactly the same binary but 100% compatible. I assume you're using external libraries anyways like log4j or maybe hibernate or whatever and those are not built using the IBM JDK.
There are differences in the JREs, however. For example, I remember that when I listed methods or fields of a class using reflection, the IBM JRE used to give them to me in a different order than the Sun one.
A: I would use the same JDK to build that is going to be used when the application is deployed (if you have control over that).
The binaries may be different if the compiler is different, but they should be semantically identical. I don't know if IBM wrote its own compiler. The JRockit JDK actually uses the Sun compiler but the JVMs are different. So with JRockit the binaries are identical.
If the application is used with different JDKs at run time, I would still build with the one that you think will be used at deployment time most of the time and do some runtime testing with different JDKs.
A: The IBM JDK ships with the J9 VM and the Sun JDK runs on the HotSpot VM, which use different algorithms internally. Your application may not perform the same if you deploy and tune on the Sun JDK and your production uses the IBM JDK for WAS. Check with the vendors and open a ticket; let us know how it goes.
A: Compiling with any JDK should not cause a problem unless you are referencing classes outside of the java.* and javax.* packages (which you should not be.) Of course there's always a chance that there's a discrepancy between a given vendor's JDK and the spec which could cause some really weird runtime errors that are hard to track down, but I've never seen this before in my experience.
I would recommend running any test suites you have using the target JRE as runtime behaviors differ between vendors much more often than compilation semantics do.
A: JDKs compile your code to bytecode, not directly to machine code. Compilers from different vendors are expected to generate cross-vendor compatible code. For example, IBM's compiler for JDK 1.5 will produce code that runs on Sun's JDK 1.5 and later without any problem.
Another issue is how compilers optimize the bytecode; I have no information that some compilers perform better optimization than others. The largest part of optimization is performed at runtime by the JVM (for example JIT (just-in-time) or AOT (ahead-of-time) strategies).
A: Having worked with WebSphere a long time, I can say the version of the JDK is very important. WebSphere 6.1 ships with an IBM JDK 1.5 (or is it 5?). When you patch WebSphere there are equivalent patches for the JDK as well. While it may work with a different version of the JDK (even a different vendor), I doubt you will get much support from IBM if something goes wrong.
If you need a 64-bit JVM, I would suggest that there is probably a 64-bit build; while I cannot comment on Windows specifically, I can tell you there is a 64-bit WebSphere 6.1 build for both AIX and Linux.
The best answer is to check with the vendor and see if they will support your configuration. What you don't want to do is get it working, then have an issue in live, call up support and find out that you have an unsupported environment.
A: They should compile to the same bytecode specification, although they may compile different bytecode (much in the same way that different C compilers generate different machine code). I don't think there would be any problems in running the resulting code - I've compiled Java 1.4 on a Mac and then deployed to IBM's J9 running on a PocketPC before with no problems (this was before J9 could handle Java 5 bytecode).
Regardless, I'd definitely make your compilation platform a bullet point on your readme file so that your client can see if it is a problem.
Alternatively, you could build and deploy with ANT, and use Sun's JDK with ANT.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Does the MVC pattern describe Roles or Layers? I read a text recently saying the MVC pattern describes the layers in an application. But personally I see MVC showing several key roles in an application.
Which word do you think is better, layer or role, to describe the three main pieces of MVC?
A: Layers should imply a very narrow coupling between the respective sets of code. MVC involves relatively tight coupling between the model, view, and controller. Therefore, if you characterize this as a layering pattern, it becomes problematic in terms of defining an API between the layers. To do this properly, you would have to implement some unintuitive patterns.
Because of this, I would agree with your tendency to view it as a pattern that defines roles within a single layer.
A: I think roles is a better description. The view and the controller are both in the same "layer" and usually the model is described as a layer but is used between layers.
Usually my applications are centered around the domain model with stuff like presentation, persistence and file-io around it. Thinking about an architecture as layered doesn't really work for me.
A: MVC clearly defines ROLES. These are 3 roles you can implement in any number of layers. For example, you can have a multi-layer controller.
A: Roles, not layers. Layers are completely dependent on the underlying implementation of the MVC pattern. For instance, a service layer may be a single layer on one implementation, but it could have a web service remoting layer and a database layer (for two differing service layers) on another implementation. The concept of layers is just to help you organize it, as is the pattern, but layers are not as easy to spot as patterns, and layers can change, whereas the pattern remains the same despite the layers changing due to different implementations.
A: You cannot compare those two words, because they describe different concepts.
To me, a layer is something opaque that offers some functions I can use to do things. For example, a good hardware layer for a wireless transmitter would just give me a send and a receive-function (based on bytes, for example), hiding all the ugly, ugly details from me.
A role is a way an object will behave. For example, a transformation in one of my compilers is going to take an abstract syntax tree and return an abstract syntax tree, or an affection in my current project is going to take a state-difference and return a specifically altered state-difference.
However, given those two definitions, I do not see the need to choose a single "correct" term and burn the other as wrong, because they don't conflict much. A part of a layer has a certain role, and a set of objects conforming to certain roles forms a layer. Certainly, the controller forms a certain layer between the UI and the model (at least for input); however, it also has a role - it turns certain events into certain other events (and thus, it is some sort of adapter).
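As a toy sketch of that reading - three roles collaborating as peers in a single layer (all names invented):
class Model:
    def __init__(self):
        self.count, self.listeners = 0, []
    def increment(self):
        self.count += 1
        for listener in self.listeners:
            listener(self.count)           # role: hold state and notify observers
class View:
    def render(self, count):
        print("count is now %d" % count)   # role: present the state
class Controller:
    def __init__(self, model):
        self.model = model
    def on_click(self):
        self.model.increment()             # role: turn a UI event into a model update
model, view = Model(), View()
model.listeners.append(view.render)
Controller(model).on_click()               # prints: count is now 1
Nothing here forces the three objects into separate layers; the tight coupling between them is what makes "roles" the better word.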
A: I think either can be reasonably argued for, but I think describing the parts as "layers" is more consistent with other conventions, like the OSI model. Since the View, Controller, and Model get progressively closer to your data, it's more of a layered structure. It seems that "roles" would apply to different parts of an application on the same layer.
A: Why not both? I see it as 3 separate layers implementing 3 different roles.
A: It's all terminology, but I think the correct software architecture term would be "layer", as in logical layer. You could use the term "architectural layer" if it is clearer.
The thing is, it's just a different way of slicing an application: a classic n-layer app would be:
*
*UI
*Business Logic
*Persistence
You could have the following logical layers in a simple MVC application:
*
*UI
*Controller
*Model
*Persistence
But you could still talk about the "UI" and "Controller" together as forming the User Interface layer -- I usually split out the Controller into a separate layer when describing and diagramming these architectures, though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to add /usr/share/java libs to webapp's classpath? Summary
Is it possible for webapps deployed on linux + tomcat5.5 to use/see all /usr/share/java/ jars automatically?
Details
I'm packaging my java webapp for Ubuntu (though the question applies to any linux-based distro) and going to make it depend on tomcat.
I'm going to put context descriptor (an xml file) to /usr/share/tomcat5.5/conf/Catalina/localhost/ to make my app deployed.
Having my web dir here: /usr/share/<appname>/web, how can I enable my app to use java jar libs installed in the system (/usr/share/java)?
I can't just symlink /usr/share/java -> <webdir>/WEB-INF/lib, since I have my custom jars need to be placed in lib dir.
Bad Solution
The solution I've found so far is to symlink each required jar to <webdir>/WEB-INF/lib/.
This is not so good, because I have to symlink a lot of jars, and even worse, symlink all the jars my direct dependency lib (jar) requires (and so on). In case my direct dependency lib changes its list of required jars, I'll have to maintain those symlinks.
A: According to the Tomcat classloading documentation, you need to put any shared libs that should be available to all Tomcat apps in the $CATALINA_BASE/shared/lib directory -- so one way to do what you're looking to do is to move your libraries from /usr/share/java to $CATALINA_BASE/shared/lib.
But if I'm not misunderstanding that same documentation, Tomcat also makes the system-wide CLASSPATH variable's contents available to the classloader at launch, so if your directory -- /usr/share/java -- were included in the system-wide CLASSPATH variable, then that should work too. I've never done this, though; Tomcat's method of making the contents of $CATALINA_BASE/shared/lib available Tomcat-wide has always served me perfectly.
A: entzik's answer led me to the following solution.
I'm going to use a modified "bad solution" (see question).
The modifications are the following:
*
*Depend on a specific package version for all dependencies (affects the "control" file while packaging for deb). Example: libcommons-io-java ( = 1.3.1) instead of just libcommons-io-java
*Symlink to the actual jar files in `/usr/share/java` and not the "generalized" ones. Example: webdir/WEB-INF/lib/commons-io.jar -> /usr/share/java/commons-io-1.3.1.jar, and not webdir/WEB-INF/lib/commons-io.jar -> /usr/share/java/commons-io.jar
These modifications ensure the webapp is not broken if the administrator installs a new version of a library (commons-io, for example).
The downside is that this approach clearly inflates the system with used-by-only-one-app versions of libraries, and may lead to a problem where some other application/library can't install due to a version conflict. I guess both potential problems are minor if we are speaking about libraries.
A: You have two options: one is to let the classloader provide the libraries to all Java programs, and the other is to provide them to all Tomcat contexts.
Add your symlinks to /usr/lib/jvm/java-1.5.0-sun-1.5.0.11/jre/lib (note you may need to specify a different version in this path) to allow all java programs access to these libraries, or add them to Tomcat's shared libraries at /var/lib/tomcat5.5/shared/libs (again, the version number may be different) for access by all Tomcat contexts.
I should also note that these directory locations were taken from Ubuntu "Feisty".
A: You should not do that. Java EE applications are supposed to be self-sufficient and not depend on any resources outside the deployment package other than those provided by the container. So you should take the libs you need from that directory and add them to your war or ear package.
This guarantees that your application will behave the same wherever you deploy it and you will not be subject to unexpected changes in the versions of the libs in /usr/share/java....
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Codesmith resources I use Codesmith to create our code generation templates and have had success in learning how to use the tool by looking at example templates and the built-in documentation. However, I was wondering if there are any other resources (books, articles, tutorials, etc.) for getting a better grasp of Codesmith?
A: We also have a great new collection of video tutorials available. You may want to check those out as well.
A: There is also a Google Code Codesmith section where you can download the latest updates of some CSLA, nHibernate and Plinqo templates.
A: Here is an interesting tutorial for building a data access layer using CodeSmith.
A: Have you checked the CodeSmith community site?
A: Depending on the templates you are using, we might have a separate website with tons of useful information like nettiers.com and plinqo.com. Also check out the help section on our community site.
We have also recently created a new WIKI (http://docs.codesmithtools.com) for all of our documentation.
Thanks
-Blake Niemyjski
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How to check whether a file is valid UTF-8? I'm processing some data files that are supposed to be valid UTF-8 but aren't, which causes the parser (not under my control) to fail. I'd like to add a stage of pre-validating the data for UTF-8 well-formedness, but I've not yet found a utility to help do this.
There's a web service at W3C which appears to be dead, and I've found a Windows-only validation tool that reports invalid UTF-8 files but doesn't report which lines/characters to fix.
I'd be happy with either a tool I can drop in and use (ideally cross-platform), or a ruby/perl script I can make part of my data loading process.
A: Use Python and the str.encode/decode functions.
>>> a="γεια"
>>> a
'\xce\xb3\xce\xb5\xce\xb9\xce\xb1'
>>> b='\xce\xb3\xce\xb5\xce\xb9\xff\xb1' # note second-to-last char changed
>>> print b.decode("utf_8")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.5/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 6: unexpected code byte
The exception thrown has the info requested in its .args property.
>>> try: print b.decode("utf_8")
... except UnicodeDecodeError, exc: pass
...
>>> exc
UnicodeDecodeError('utf8', '\xce\xb3\xce\xb5\xce\xb9\xff\xb1', 6, 7, 'unexpected code byte')
>>> exc.args
('utf8', '\xce\xb3\xce\xb5\xce\xb9\xff\xb1', 6, 7, 'unexpected code byte')
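Wrapped up as a small helper in the same (Python 2 era) style - the function name is my own:
def validate_utf8(path):
    data = open(path, 'rb').read()
    try:
        data.decode('utf_8')
        return None   # valid
    except UnicodeDecodeError, exc:
        return exc    # exc.start is the byte offset of the first bad sequence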
A: How about the GNU iconv library? Using the iconv() function: "An invalid multibyte sequence is encountered in the input. In this case it sets errno to EILSEQ and returns (size_t)(-1). *inbuf is left pointing to the beginning of the invalid multibyte sequence."
EDIT: Oh - I missed the part where you want a scripting language. But for command-line work, the iconv utility should validate for you too.
A: Here is the bash script to check whether a file is valid UTF-8 or not:
#!/bin/bash
inputFile="./testFile.txt"
iconv -f UTF-8 "$inputFile" -o /dev/null
if [[ $? -eq 0 ]]
then
echo "Valid UTF-8 file.";
else
echo "Invalid UTF-8 file!";
fi
Description:
* --from-code, -f encoding (convert characters from encoding)
* --to-code, -t encoding (convert characters to encoding; it doesn't have to be specified, it will be assumed to be UTF-8)
* --output, -o file (specify output file instead of stdout)
A: You can use GNU iconv:
$ iconv -f UTF-8 your_file -o /dev/null; echo $?
Or with older versions of iconv, such as on macOS:
$ iconv -f UTF-8 your_file > /dev/null; echo $?
The command will return 0 if the file could be converted successfully, and 1 if not. Additionally, it will print out the byte offset where the invalid byte sequence occurred.
Edit: The output encoding doesn't have to be specified, it will be assumed to be UTF-8.
A: You can use isutf8 from the moreutils collection.
$ apt-get install moreutils
$ isutf8 your_file
In a shell script, use the --quiet switch and check the exit status, which is zero for files that are valid utf-8.
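For example, a minimal sketch (file name is a placeholder):
if isutf8 --quiet your_file; then
    echo "Valid UTF-8 file."
else
    echo "Invalid UTF-8 file!"
fi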
A: You can also use recode, which will exit with an error if it tries to decode UTF-8 and encounters invalid characters.
if recode utf8/..UCS < "$FILE" >/dev/null 2>&1; then
echo "Valid utf8 : $FILE"
else
echo "NOT valid utf8: $FILE"
fi
This tries to recode to the Universal Character Set (UCS) which is always possible from valid UTF-8.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "79"
}
|
Q: Can I safely edit a renamed file in perforce I have a file I need to move that's already under perforce. Once moved it needs some editing - update the package, etc - appropriate to its new location. Should I submit the move changespec and then reopen it for edit, or can I do this in one go? If so, what is the appropriate sequence of events?
A: I have done this before in one go, but depending on your build process, I recommend against it. What I generally do is this:
1. Move the file.
2. If the move needs a change in order to compile, open it for edit and make those changes.
3. Submit the changes, telling Perforce to reopen the files for editing.
4. Make the changes for path, etc., that don't cause compile errors but should be updated.
5. Submit those changes with an appropriate description.
If you want to, however, you could just do all your changes in step (2) above. Perforce might change the flag for the new file from integrate to add, but it still remembers the source path for the file.
Edit: Better method
I realized that I often use a different method, but the idea of "moving" the file distracted me. So, I would recommend these steps instead:
1. Integrate the file into the new path/name, leaving the previous file there. I am assuming that this won't break your build process.
2. Submit the new file, checking it out again for edit after submission.
3. Make the required changes to the new file, and to the project so that you are using the new file.
4. Submit the edits for the new file.
5. [Optional] You might need to check through branch specs to see if you need to map the old file into the new one in any branches.
6. Create a changelist for deleting the old file, and submit it sometime later.
This method allows the edits to be cleanly separated from the rename/move, while never leaving the project in a state that won't compile.
Also, why wait for step 6? Sometimes, especially on bigger projects, you might want to move a file that another person is editing. Perforce will helpfully tell you this. By waiting to delete the file, you allow your coworker(s) to finish the edits and submit without needing to move their work manually. After the edits are submitted, they can be integrated into the new file, and then the old one can be safely deleted.
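For reference, a sketch of this sequence using the p4 command line (depot paths and descriptions hypothetical):
p4 integrate //depot/proj/OldName.java //depot/proj/NewName.java
p4 submit -d "Branch OldName to NewName"
p4 edit //depot/proj/NewName.java
# fix the package/path references in the new file, then:
p4 submit -d "Update NewName for its new location"
# later, once coworkers have finished their edits on the old file:
p4 delete //depot/proj/OldName.java
p4 submit -d "Remove OldName"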
A: Submit the move change and then reopen for edit (you could use the reopen option too).
This is much more readable to the user in the change history.
Also, recent versions of Perforce do perform checks for changes to files after resolution. So, there may be complaints editing files after some resolve operations have been completed.
A: I would say always submit first, then edit. It is much cleaner and makes it more obvious what's happening in your repository. Then simply check out the file in the new location and make whatever changes are needed. This also makes it much more obvious that the changes were made in the new location to allow it to work after renaming.
A: "Safely" is probably an important point here. Once you rename or move the file it'll get a revision number of "1" which looks like a new file to your Perforce client. Of course, admins will be able to get its prior history, but if the editing/version history of the file is important to you it's a little harder to get the older revision.
Update: Thanks to Commodore Jaeger and Greg Whitfield for enlightening comments.
This wasn't easy to track down regarding what the One True Answer is, even from Perforce support, so I figured I'd update everyone on what we found:
* Perforce stores all versions of every document in its database.
* If it's saving your file as type <text> or <ktext>, then it stores the diffs from one file version to another, not the entire file.
* If you check out a file, make no changes to it, and then re-submit, it will save a new version with 0 diffs. This is configurable, and P4 can be set up to ignore changelist items without any actual diffs. You can force this behavior by selecting "Revert unchanged files..." before you submit a changelist.
* Use "Rename/Move..." to move files in P4 so it can track them. Don't copy them using Windows Explorer and then re-add them in P4.
* If you use the "Rename/Move..." function from the context menu, the "new" file will show a revision number of "1" as though it were a new file.
* However, since P4 saves every operation performed on a file, you can actually get to any previous revision (and even recover "deleted" files) with the CLI command p4 filelog -i
* If you want to get to the revision history of a moved or renamed file and you're not an admin, you can right-click and select its "Revision Graph", which shows every version of a file even when moved between branches.
According to Perforce support, easier tracking of revision history through branch or folder moves is an oft-requested feature and is in their current roadmap.
Perforce's answer: At the moment, there isn't a way to move/rename/integrate files and still maintain the exact file history.
However, if you were to choose "Integrate..." by right-clicking on the folder that you want to share, the versions of the files of the newly branched folder and underlying files will start from revision #1, but the integration history between the branched folder and underlying files and the original folder and underlying files will remain through which you can trace the revision history of the files.
A: Yes you can. Simply reopen for edit the branched file (i.e. the new one). In P4Win, there is a context menu for this ("re-open for edit").
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How to disable a programmatic breakpoint / assert? I am using Visual Studio to develop a native application. I have a programmatic breakpoint (assert) in my code, placed using __asm int 3 or __debugbreak. Sometimes when I hit it, I would like to disable it so that successive hits in the same debugging session no longer break into the debugger. How can I do this?
A: You might try something like this:
#define ASSERT(x) {\
if (!(x)) \
{ \
static bool ignore = false; \
if (!ignore) \
{ \
ignore = true; \
__asm int 3 \
} \
}\
}
This should hit the debugger only once. You might even show a message box to the user and ask what to do: continue (nothing happens), break (int 3 is executed) or ignore (ignore is set to true, so the breakpoint is never hit again).
A: x86 / x64
Assuming you are writing an x86/x64 application, write the following in your watch window:
x86: *(char *)eip,x
x64: *(char *)rip,x
You should see a value of 0xcc, which is the opcode for INT 3. Replace it with 0x90, which is the opcode for NOP. You can also use the memory window with eip as the address.
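If you'd rather apply the same patch from code than from the watch window, a minimal Win32 sketch (the address of the int 3 is assumed known, e.g. taken from the debugger; the helper name is my own):
#include <windows.h>

// overwrite an INT 3 (0xcc) at addr with NOP (0x90) so it never fires again
void DisableBreakpoint(void* addr)
{
    DWORD oldProtect;
    VirtualProtect(addr, 1, PAGE_EXECUTE_READWRITE, &oldProtect);
    *static_cast<unsigned char*>(addr) = 0x90;
    VirtualProtect(addr, 1, oldProtect, &oldProtect);
    FlushInstructionCache(GetCurrentProcess(), addr, 1);
}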
PPC
Assuming you are writing a PPC application (e.g. Xbox 360), write the following in your watch window:
*(int *)iar,x
You should see a value of 0xfeNNNNNN, which is the opcode for a trap (most often 0x0fe00016 = unconditional trap). Replace it with 0x60000000, which is the opcode for NOP.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: RegisterStartupScript on Logout page I am trying to have a logout page that displays a message and then redirects to the login page. This is in ASP.NET 2.0.
I have this in my Page_Load:
ClientScript.RegisterStartupScript(typeof(Page), "pageredirect", JavascriptRedirect() );
This is my redirect function:
private string JavascriptRedirect()
{
StringBuilder sb = new StringBuilder();
sb.Append("<script type=\"text/javascript\" language=\"javascript\">");
sb.Append("var x = 5;");
sb.Append("var y = 1;");
sb.Append("function startClock(){");
sb.Append("x = x-y;");
sb.Append("t=setTimeout(\"startClock()\", 1000);");
sb.Append("if(x==0){");
sb.Append("window.location='login.aspx';");
sb.Append("clearTimeout(t);");
sb.Append(" }");
sb.Append(" }");
sb.Append("startClock();");
sb.Append("</script>");
return sb.ToString();
}
When I test, there is no JavaScript in my resulting logout page. Anyone have an idea what is happening? Is Page_Load too late to register this?
A: By the way: you don't need JavaScript to redirect the browser to a page after a certain amount of time. Just use a plain HTML meta tag in your <HEAD> section.
<meta http-equiv="refresh" content="5; URL=login.aspx">
The number stands for the time in seconds, the URL for the target.
A: Instead of typeof(Page), try typeof(YOURPAGECLASS) -- with YOURPAGECLASS set to the name of your page's class. Page_Load is definitely not too late.
A: Perhaps there is something unusual going on in your execution pipeline?
When I create a new .aspx and paste in the code you're providing in your question, it works as you expect it to (i.e. Javascript is rendered to the client).
A: I would register it like this:
Page.ClientScript.RegisterStartupScript(this.GetType(), "Redirect", "script here", true);
The last 'true' tells ASP.NET to render the script tags and CDATA so you don't have to write it all out.
You may also want to place your javascript in a separate include file so that you don't have to recompile if/when you tweak it. If you do that, then in your page load you'd have:
Page.ClientScript.RegisterClientScriptInclude("RedirectInclude", "scripts/redirect.js");
Page.ClientScript.RegisterStartupScript(this.GetType(), "Redirect", "JavascriptRedirect();", true);
A: I don't understand why it doesn't work, but I can give you a workaround:
declare an asp:Literal item in the page. In Page_Load,
yourLiteral.Text = JavascriptRedirect();
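A minimal sketch of that workaround (control ID hypothetical):
<%-- in the .aspx markup --%>
<asp:Literal ID="redirectLiteral" runat="server" />

// in the code-behind
protected void Page_Load(object sender, EventArgs e)
{
    redirectLiteral.Text = JavascriptRedirect();
}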
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Refactoring Java factory method There's something very unsatisfactory about this code:
/*
Given a command string in which the first 8 characters are the command name
padded on the right with whitespace, construct the appropriate kind of
Command object.
*/
public class CommandFactory {
public Command getCommand(String cmd) {
String cmdName = cmd.substring(0, 8).trim();
if(cmdName.equals("START")) {
return new StartCommand(cmd);
}
if(cmdName.equals("END")) {
return new EndCommand(cmd);
}
// ... more commands in more if blocks here
// else it's a bad command.
return new InvalidCommand(cmd);
}
}
I'm unrepentant about the multiple exit points - the structure is clear. But I'm not happy about the series of near-identical if statements. I've considered making a Map of Strings to Commands:
commandMap = new HashMap();
commandMap.put("START",StartCommand.class);
// ... etc.
... then using Reflection to make instances of the appropriate class looked up from the Map. However, while conceptually elegant, this involves a fair amount of Reflection code that whoever inherits this code might not appreciate - although that cost might be offset by the benefits. All the lines hardcoding values into the commandMap smell almost as bad as the if block.
Even better would be if the factory's constructor could scan the classpath for subclasses of Command, query them for String representations, and automatically add them to its repertoire.
So - how should I go about refactoring this?
I guess some of the frameworks out there give me this kind of thing for free. Let's assume I'm not in a position to migrate this stuff into such a framework.
A: With the exception of the
cmd.substring(0, 8).trim();
part, this doesn't look too bad to me. You could go with the Map and use reflection, but, depending on how often you add/change commands, this might not buy you much.
You should probably document why you only want the first 8 characters, or maybe change the protocol so it's easier to figure out which part of that string is the command (e.g. put a marker like ':' or ';' after the command key-word).
A: It's not directly an answer to your question, but why don't you throw an InvalidCommandException (or something similar), rather than returning an object of type InvalidCommand?
A: Unless there is a reason they can't be, I always try to make my command implementations stateless. If that's the case, you can add a boolean identifier(String id) method to your command interface which would tell whether this instance could be used for the given string identifier. Then your factory could look something like this (note: I did not compile or test this):
public class CommandFactory {
private static List<Command> commands = new ArrayList<Command>();
public static void registerCommand(Command cmd) {
commands.add(cmd);
}
public Command getCommand(String cmd) {
for(Command instance : commands) {
if(instance.identifier(cmd)) {
return instance;
}
}
throw new CommandNotRegisteredException(cmd);
}
}
A: I like your idea, but if you want to avoid reflection you could add instead instances to the HashMap:
commandMap = new HashMap();
commandMap.put("START",new StartCommand());
Whenever you need a command, you just clone it:
command = ((Command) commandMap.get(cmdName)).clone();
And afterwards, you set the command string:
command.setCommandString(cmdName);
But using clone() doesn't sound as elegant as using reflection :(
A: How about the following code:
public enum CommandFactory {
START {
@Override
Command create(String cmd) {
return new StartCommand(cmd);
}
},
END {
@Override
Command create(String cmd) {
return new EndCommand(cmd);
}
};
abstract Command create(String cmd);
public static Command getCommand(String cmd) {
String cmdName = cmd.substring(0, 8).trim();
CommandFactory factory;
try {
factory = valueOf(cmdName);
}
catch (IllegalArgumentException e) {
return new InvalidCommand(cmd);
}
return factory.create(cmd);
}
}
The valueOf(String) of the enum is used to find the correct factory method. If the factory doesn't exist it will throw an IllegalArgumentException. We can use this as a signal to create the InvalidCommand object.
An extra benefit is that if you can make the method create(String cmd) public if you would also make this way of constructing a Command object compile time checked available to the rest of your code. You could then use CommandFactory.START.create(String cmd) to create a Command object.
The last benefit is that you can easily create a list of all available command in your Javadoc documentation.
A: Your map of strings to commands I think is good. You could even factor out the string command name to the constructor (i.e. shouldn't StartCommand know that its command is "START"?) If you could do this, instantiation of your command objects is much simpler:
Class c = (Class) commandMap.get(cmdName);
if (c != null)
    return (Command) c.newInstance();
else
    throw new IllegalArgumentException(cmdName + " is not a valid command");
Another option is to create an enum of all your commands with links to the classes (assume all your command objects implement CommandInterface):
public enum Command
{
START(StartCommand.class),
END(EndCommand.class);
private Class<? extends CommandInterface> mappedClass;
private Command(Class<? extends CommandInterface> c) { mappedClass = c; }
public CommandInterface getInstance()
{
return mappedClass.newInstance();
}
}
since the toString of an enum is its name, you can use EnumSet to locate the right object and get the class from within.
A: Taking a Convention over Configuration approach and using reflection to scan for available Command objects and loading them into your map would be the way to go. You then have the ability to expose new Commands without a recompile of the factory.
A: Another approach to dynamically finding the class to load, would be to omit the explicit map, and just try to build the class name from the command string. A title case and concatenate algorithm could turn "START" -> "com.mypackage.commands.StartCommand", and just use reflection to try to instantiate it. Fail somehow (InvalidCommand instance or an Exception of your own) if you can't find the class.
Then you add commands just by adding one object and start using it.
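A minimal sketch of that idea (the package name and class-name scheme are illustrative only):
public Command getCommand(String cmd) {
    String cmdName = cmd.substring(0, 8).trim();
    // "START" -> "Start" -> "com.mypackage.commands.StartCommand"
    String className = "com.mypackage.commands."
            + cmdName.charAt(0) + cmdName.substring(1).toLowerCase() + "Command";
    try {
        return (Command) Class.forName(className)
                .getConstructor(String.class)
                .newInstance(cmd);
    } catch (Exception e) {
        return new InvalidCommand(cmd);
    }
}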
A: One option would be for each command type to have its own factory. This gives you two advantages:
1) Your generic factory wouldn't call new. So each command type could in future return an object of a different class according to the arguments following the space padding in the string.
2) In your HashMap scheme, you could avoid reflection by, for each command class, mapping to an object implementing a SpecialisedCommandFactory interface, instead of mapping to the class itself. This object in practice would probably be a singleton, but need not be specified as such. Your generic getCommand then calls the specialised getCommand.
That said, factory proliferation can get out of hand, and the code you have is the simplest thing that does the job. Personally I'd probably leave it as it is: you can compare command lists in source and spec without non-local considerations like what might have previously called CommandFactory.registerCommand, or what classes have been discovered through reflection. It's not confusing. It's very unlikely to be slow for less than a thousand commands. The only problem is that you can't add new command types without modifying the factory. But the modification you'd make is simple and repetitive, and if you forget to make it you get an obvious error for command lines containing the new type, so it's not onerous.
A: Having this repetitive object creation code all hidden in the factory is not so bad. If it has to be done somewhere, at least it's all here, so I'd not worry about it too much.
If you really want to do something about it, maybe go for the Map, but configure it from a properties file, and build the map from that props file.
Without going the classpath discovery route (about which I don't know), you'll always be modifying 2 places: writing a class, and then adding a mapping somewhere (factory, map init, or properties file).
A: Thinking about this, You could create little instantiation classes, like:
class CreateStartCommands implements CommandCreator {
    public boolean is_fitting_commandstring(String identifier) {
        return identifier.equals("START");
    }
    public StartCommand create_instance(String cmd) {
        return new StartCommand(cmd);
    }
}
Of course, this adds a whole bunch of tiny classes that can't do much more than say "yes, that's start, give me that" or "nope, don't like that". However, you can now rework the factory to contain a list of those CommandCreators and just ask each of them: "do you like this command?", returning the result of create_instance from the first accepting CommandCreator. Of course it now looks kind of awkward to extract the first 8 characters outside of the CommandCreator, so I would rework that so you pass the entire command string into the CommandCreator.
I think I applied some "Replace switch with polymorphism"-Refactoring here, in case anyone wonders about that.
A: I'd go for the map and creation via reflection. If scanning the class path is too slow, you can always add a custom annotation to the class, have an annotation processor running at compile time and store all class names in the jar metadata.
Then, the only mistake you can do is forgetting the annotation.
I did something like this a while ago, using maven and APT.
A: The way I do it is to not have a generic Factory method.
I like to use Domain Objects as my command objects. Since I use Spring MVC this is a great approach since the DataBinder.setAllowedFields method allows me a great deal of flexibility to use a single domain object for several different forms.
To get a command object, I have a static factory method on the Domain object class. For example, in the member class I'd have methods like -
public static Member getCommandObjectForRegistration();
public static Member getCommandObjectForChangePassword();
And so on.
I'm not sure that this is a great approach, I never saw it suggested anywhere and kind of just came up with it on my own b/c I like the idea of keeping things like this in one place. If anybody sees any reason to object please let me know in the comments...
A: I would suggest avoiding reflection if at all possible. It is somewhat evil.
You can make your code more concise by using the ternary operator:
return
cmdName.equals("START") ? new StartCommand (cmd) :
cmdName.equals("END" ) ? new EndCommand (cmd) :
new InvalidCommand(cmd);
You could introduce an enum. Making each enum constant a factory is verbose and also has some runtime memory cost. But you can easily look up an enum and then use that with == or switch.
import xx.example.Command.*;
Command command = Command.valueOf(commandStr);
return
command == START ? new StartCommand (commandLine) :
command == END ? new EndCommand (commandLine) :
new InvalidCommand(commandLine);
A: Go with your gut, and reflect. However, in this solution, your Command interface is now assumed to have the setCommandString(String s) method accessible, so that newInstance is easily usable. Also, commandMap is any map with String keys (cmd) to the Command classes they correspond to.
public class CommandFactory {
public Command getCommand(String cmd) {
if(cmd == null) {
return new InvalidCommand(cmd);
}
Class commandClass = (Class) commandMap.get(cmd);
if(commandClass == null) {
return new InvalidCommand(cmd);
}
try {
Command newCommand = (Command) commandClass.newInstance();
newCommand.setCommandString(cmd);
return newCommand;
}
catch(Exception e) {
return new InvalidCommand(cmd);
}
    }
}
A: Hmm, browsing, and only just came across this. Can I still comment?
IMHO there's nothing wrong with the original if/else block code. This is simple, and simplicity must always be our first call in design (http://c2.com/cgi/wiki?DoTheSimplestThingThatCouldPossiblyWork)
This seems especially true as all the solutions offered are much less self-documenting than the original code... I mean, shouldn't we write our code for reading rather than translation...
A: At the very least, your command should have a getCommandString() -- where StartCommand overrides to return "START". Then you can just register or discover the classes.
A: +1 on the reflection suggestion, it will give you a more sane structure in your class.
Actually you could do the following (if you haven't thought about it already)
create methods corresponding to the String you'd be expecting as an argument to your getCommand() factory method, then all you have to do is reflect and invoke() these methods and return the correct object.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: Fast search in java swing applications? I'm wondering which component is best for displaying fast search results in Swing. I want to create something like this: a text field where the user can enter some text; while they are typing, I will run a fast back-end search against the database, and I want to show the results below the text box. The user should be able to browse the results, and on pressing Enter the selected result will be displayed in a table. So my question is: is there any component which already has this logic for displaying results?
Or if there isn't, what is the best way to implement it?
This search should work like what Ajax gives me on the web - same logic, same look and feel - if that's possible in a desktop application.
A: Are you looking for something like an AutoComplete component for Java Swing?
SwingX has such a component. See here for the JavaDoc. It has a lot of utility methods to do various things, i.e. auto-completing a text box from the contents of a JList.
A: I strongly, strongly recommend that you take a look at Glazed Lists - this is one of the finer open source Java libraries out there, and it makes the bulk of what you are asking about super easy.
A: You will have to first attach a listener to the JTextField's Document to be notified whenever the user types in the field (or changes it).
From there, you can fire off any server-side code you need. The results of that can be used to update a list box (see the sketch after the list below).
A few things to keep in mind:
* The code to do the search against the backend must be in another thread.
* The code that updates the list box should update the list box's model.
* You will need to manage all your backend search results so that you only update the list box with the most recent result (e.g. the user types 'A' and the backend searches for that; meanwhile, the user has typed 'C', kicking off a backend search for 'AC' - you need to ensure the results from the 'A' search don't make it to the list box if the 'AC' search results are available).
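A minimal sketch of that structure (SearchBackend and its search() method are hypothetical; SwingWorker requires Java 6):
import javax.swing.*;
import javax.swing.event.*;

interface SearchBackend {
    java.util.List<String> search(String query);
}

class QuickSearchPanel extends JPanel {
    private final JTextField searchField = new JTextField();
    private final JList resultList = new JList();
    private final SearchBackend backend;

    QuickSearchPanel(SearchBackend backend) {
        this.backend = backend;
        searchField.getDocument().addDocumentListener(new DocumentListener() {
            public void insertUpdate(DocumentEvent e)  { startSearch(); }
            public void removeUpdate(DocumentEvent e)  { startSearch(); }
            public void changedUpdate(DocumentEvent e) { startSearch(); }
        });
    }

    private void startSearch() {
        final String query = searchField.getText();
        new SwingWorker<java.util.List<String>, Void>() {
            protected java.util.List<String> doInBackground() {
                return backend.search(query);              // runs off the EDT
            }
            protected void done() {
                try {
                    if (query.equals(searchField.getText())) // drop stale results
                        resultList.setListData(get().toArray());
                } catch (Exception ignored) { }
            }
        }.execute();
    }
}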
A: Use Hibernate Search.
The SwingHack (http://oreilly.com/catalog/9780596009076/) book has an example of this.
A: In the interest of killing two birds with one stone: have a separate indexing thread. This will:
* Improve the speed of searches whenever they are executed.
* Improve the responsiveness of the UI, since indexing happens in a separate thread.
Of course, exactly how you perform the indexing will vary widely depending on your particular application. Here is a good place to start researching: Search Indexing. And please, ignore the reference to Web 3.0 [sic].
A: It is possible, of course, and it is simple too. For the drop-down list of terms, just use a popup menu. The background processing of the entered text is simple as well. Enjoy!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: error when switching to different svn branch I've got two SVN branches (e.g. development and stable) and want to switch from one to another...
In every tutorial there is command like:
rootOfLocalSvnCopy:>svn switch urlToNewBranch .
But it leads in error in my case:
svn: REPORT request failed on '/svn/rootOfLocalSvnCopy/!svn/vcc/default'<br/>
svn: Cannot replace a directory from within
All the help I found is about svn switch --relocate, but I don't want to relocate, just change my working copy to another branch.
A: OK, I got it to work.
The error was the dot that I used to specify the local directory in the command. The correct usage is without it; svn can handle it all itself:
rootOfLocalSvnCopy:>svn switch urlToNewBranch
(No dot at the end...)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Which Reporting technology? Which reporting technology would fit best for a given situation/type of product? I am currently considering three technologies:
1. Embedded reports (Crystal Reports; MS Reporting Services)
2. Server reports (MS Reporting Services)
3. OLAP databases (MS Analysis Services)
Which report technology would you use for an off-the-shelf product? Is it possible to have OLAP-based reporting in an off-the-shelf product?
Which technology is best suited for historical data? I would guess an OLAP database would be quicker here, but that would depend on the size of the database, because I reckon you could also use embedded reports for historical data.
Which technology would be best for custom software solutions?
I like the idea of having reporting on the server, where a user can log in and run reports, as with MS Reporting Services, and really only having embedded reports for things like invoices, bills, customer information sheets, etc. And also having Reporting Services over an OLAP database for historical data.
Unfortunately, management does not see this layout and wants an off-the-shelf product, with OLAP reporting right inside the application with all other reports.
A: I like Reporting Services. It can be used as you say, with the customer logging into the Reporting Services web site. But there is also a component you can add to your application which uses Reporting Services on the back end. Best of both worlds.
Also, you can access data in analysis services or any other database.
A: OLAP isn't a reporting platform, it's in the database layer.
If you're going to have a collection of pre-planned, canned reports, then Crystal or RS are the best ideas. Personally I prefer Crystal but it can be quite a pain to develop reports - but when they're approved, Crystal is a rock steady platform. (We integrate Crystal with .NET apps.)
RS integrates just as nicely, but you do have to maintain the server. Their big advantage is dynamic/reactive menuing, but they are just as tricky to develop and maintain when not quite perfect.
OLAP is a really powerful technology - but if you've not got local knowledge, it's a really challenging product to deploy accurately. But, again, it's not a reporting product - but there are some interesting layers on top of it (e.g. ProClarity, Excel plug-in).
A: Also you could take a look at (our very own) i-net Clear Reports (used to be i-net Crystal-Clear). Fully Java-based, can read Crystal Reports templates, and offer both a nice and simple API as well as a servlet for any major web server. Has nice charts using JFreeChart. Can export to PDF, HTML, SVG, as well as to a Swing Java Viewer you can embed into your own applications. We also offer a free and fully functional standalone report designer.
Costs a lot less than CR, also.
A: We are using XtraReports from DevExpress. The ratio price/productivity is very high and you can get source codes.
You can use it for desktop or web applications ( or export to pdf, doc, html, etc...) and end-user designer is delivered natively by DevExpress. I believe, this is one of the best reporting suite ( with Telerik Reports ).
A: I really like Reporting Services. You can embed reports into web pages, you can give users access to your reports over the web, you can even automate report delivery by having reports emailed to users at a set schedule. You can also create reports off OLAP databases. Plus Reporting Services comes with SQL Server so it can save some money.
A: Crystal reports is very easy and quick to use but it is also fairly limited. If all you need to do is slap some aggregate information onto a report, right out of a database, then crystal reports will be fine for you. Not sure about the others.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How much speed-up from converting 3D maths to SSE or other SIMD? I am using 3D maths in my application extensively. How much speed-up can I achieve by converting my vector/matrix library to SSE, AltiVec or a similar SIMD code?
A: In my experience I typically see about a 3x improvement in taking an algorithm from x87 to SSE, and a better than 5x improvement in going to VMX/Altivec (because of complicated issues having to do with pipeline depth, scheduling, etc). But I usually only do this in cases where I have hundreds or thousands of numbers to operate on, not for those where I'm doing one vector at a time ad hoc.
A: That's not the whole story, but it's possible to get further optimizations using SIMD. Have a look at Miguel's presentation about implementing SIMD instructions in Mono, which he gave at PDC 2008.
(Image from Miguel's blog entry; source: tirania.org)
A: For some very rough numbers: I've heard some people on ompf.org claim 10x speed ups for some hand-optimized ray tracing routines. I've also had some good speed ups. I estimate I got somewhere between 2x and 6x on my routines depending on the problem, and many of these had a couple of unnecessary stores and loads. If you have a huge amount of branching in your code, forget about it, but for problems that are naturally data-parallel you can do quite well.
However, I should add that your algorithms should be designed for data-parallel execution.
This means that if you have a generic math library, as you've mentioned, then it should take packed vectors rather than individual vectors, or you'll just be wasting your time.
E.g. something like:
namespace SIMD {
class PackedVec4d
{
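    // structure-of-arrays: each register holds one component of four vectors,
    // so a single SSE instruction advances four vectors at once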
__m128 x;
__m128 y;
__m128 z;
__m128 w;
//...
};
}
Most problems where performance matters can be parallelized since you'll most likely be working with a large dataset. Your problem sounds like a case of premature optimization to me.
A: For 3D operations beware of un-initialized data in your W component. I've seen cases where SSE ops (_mm_add_ps) would take 10x normal time because of bad data in W.
A: The answer highly depends on what the library is doing and how it is used.
The gains can go from a few percent points, to "several times faster", the areas most susceptible of seeing gains are those where you're not dealing with isolated vectors or values, but multiple vectors or values that have to be processed in the same way.
Another area is when you're hitting cache or memory limits, which, again, requires a lot of values/vectors being processed.
The domains where gains can be the most drastic are probably those of image and signal processing, computational simulations, as well general 3D maths operation on meshes (rather than isolated vectors).
A: These days all the good compilers for x86 generate SSE instructions for SP and DP float math by default. It's nearly always faster to use these instructions than the native ones, even for scalar operations, so long as you schedule them correctly. This will come as a surprise to many, who in the past found SSE to be "slow", and thought compilers could not generate fast SSE scalar instructions. But now, you have to use a switch to turn off SSE generation and use x87. Note that x87 is effectively deprecated at this point and may be removed from future processors entirely. The one downside of this is that we may lose the ability to do 80-bit DP float in registers. But the consensus seems to be that if you are depending on 80-bit instead of 64-bit DP floats for precision, you should look for a more precision-loss-tolerant algorithm.
Everything above came as a complete surprise to me. It's very counter intuitive. But data talks.
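For reference, the switches involved look roughly like this (GCC and MSVC spellings; check your own compiler's docs):
# GCC: route scalar float math through SSE (the default on x86-64)
gcc -O2 -msse2 -mfpmath=sse main.c
# GCC: force the old x87 behaviour back
gcc -O2 -mfpmath=387 main.c
# MSVC (32-bit targets)
cl /O2 /arch:SSE2 main.c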
A: Most likely you will see only very small speedup, if any, and the process will be more complicated than expected. For more details see The Ubiquitous SSE vector class article by Fabian Giesen.
The Ubiquitous SSE vector class: Debunking a common myth
Not that important
First and foremost, your vector class is probably not as important for the performance of your program as you think (and if it is, it's more likely because you're doing something wrong than because the computations are inefficient). Don't get me wrong, it's probably going to be one of the most frequently used classes in your whole program, at least when doing 3D graphics. But just because vector operations will be common doesn't automatically mean that they'll dominate the execution time of your program.
Not so hot
Not easy
Not now
Not ever
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Do you know a good open-source version-control viewer? I'm looking for a tool like Atlassian's FishEye. The alternatives I've found so far (like StatCVS, ViewCVS or Bonsai) are either lacking in features or are quite a pain to install and maintain. So before staying with one of these tools, I'd like to be sure I did not miss any other good, easy-to-install, open-source (preferably Java) version-control viewer which supports CVS as the SCM.
A: Another SVN tool which has repository browsing capabilities is Trac. This is nice because as well as a browser for the repository it also has a timeline showing commits. It also does bug tracking.
A: Warehouse is pretty cool
A: There is also CVS Monitor, though it hasn't got nearly the number of features FishEye has.
We use ViewCVS for repository browsing.
A: If you were using SVN I'd highly recommend TortoiseSVN.
A: ViewVC is a good open-source, web-based repository viewer similar to FishEye. I know you've looked at it, and you're right, it was a hassle to set up; but once set up, it has run without any intervention for almost three years for us.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Does it make sense to mix an RTOS and cyclic executive? On a small embedded system project we have some code which we would like to run in a thread, so we are electing to build on top of an embedded RTOS (eCos).
Previously, we used a cyclic executive in main() that drove tasks, each implemented as a state machine. For some tasks we encountered problems where the task would need to be broken up into many fine-grained states, thus making the code extra complex.
When switching to an RTOS we found the memory usage for each thread's stack adds up quickly if we give each separate task its own thread. (We only have 64K and need the memory for our communication buffers.)
We are considering using a thread for our communications task and another thread for a cyclic executive. The cyclic executive will drive the other logical tasks.
Does it make sense to mix an RTOS and cyclic executive like this?
A: This is a perfectly valid design.
In one of our products, we used a similar design, where the asynchronous I/O channels (TCP/IP, 2 serial streams) were in their own tasks and we had a "main" task which would be responsible for multiple areas of functionality.
Think of tasks as simply a partitioning mechanism that allows you to simplify your design.
A: Yes, having a cyclic executive in one OS thread running multiple 'tasks' can make sense. In fact unless two tasks conflict with scheduling needs (one needs to block, one is higher priority than the other and the low-priority one takes a long time to execute, etc.), I'd recommend putting them in the same thread.
This is especially true in the case where you are using a lightweight RTOS with no memory protection: separate threads aren't any safer than one thread (no MMU protection of address spaces); in fact they are potentially more dangerous because of the greater need for concurrency protection. Even if your IPC scheme is robust and not susceptible to misuse by programmers, its overhead is usually non-zero, so avoiding the need for it can result in performance gains.
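A minimal sketch of that arrangement in plain C (task names hypothetical; RTOS thread creation and the tick wait are left out):
#include <stddef.h>

typedef void (*task_fn)(void);

/* each task runs one state-machine step and returns quickly */
static void sensor_task(void) { /* ... */ }
static void logger_task(void) { /* ... */ }

static task_fn tasks[] = { sensor_task, logger_task };

/* entry point of the single RTOS thread hosting the cyclic executive */
void cyclic_executive_thread(void)
{
    for (;;) {
        for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; ++i)
            tasks[i]();
        /* block on a tick/delay here so the communications thread can run */
    }
}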
A: If you look at FreeRTOS, they actually run another scheduler in a task, sort of :)
And to echo others, nothing sounds wrong in the design. No reason (some of) your tasks can't be state machines if there's a clear way to express something that way.
A: It is a valid design, but I think I missed the reason for having the OS at all.
What facilities of the OS are you planning to use?
From the information available it seems that you will end up moving the complexity of the tasks to your new main loop.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: How can I consume a WSDL (SOAP) web service in Python? I want to use a WSDL SOAP based web service in Python. I have looked at the Dive Into Python code but the SOAPpy module does not work under Python 2.5.
I have tried using suds which works partly, but breaks with certain types (suds.TypeNotFound: Type not found: 'item').
I have also looked at Client but this does not appear to support WSDL.
And I have looked at ZSI but it looks very complex. Does anyone have any sample code for it?
The WSDL is https://ws.pingdom.com/soap/PingdomAPI.wsdl and works fine with the PHP 5 SOAP client.
A: Right now (as of 2008), all the SOAP libraries available for Python suck. I recommend avoiding SOAP if possible. The last time we were forced to use a SOAP web service from Python, we wrote a wrapper in C# that handled the SOAP on one side and spoke COM out the other.
A: I periodically search for a satisfactory answer to this, but no luck so far. I use soapUI + requests + manual labour.
I gave up and used Java the last time I needed to do this, and simply gave up a few times the last time I wanted to do this, but it wasn't essential.
Having successfully used the requests library last year with Project Place's RESTful API, it occurred to me that maybe I could just hand-roll the SOAP requests I want to send in a similar way.
Turns out that's not too difficult, but it is time-consuming and prone to error, especially if fields are inconsistently named (the one I'm currently working on today has 'jobId', 'JobId' and 'JobID'). I use soapUI to load the WSDL to make it easier to extract endpoints etc. and perform some manual testing. So far I've been lucky not to have been affected by changes to any WSDL that I'm using.
A: I would recommend that you have a look at SUDS
"Suds is a lightweight SOAP python client for consuming Web Services."
A: There is a relatively new library which is very promising and albeit still poorly documented, seems very clean and pythonic: python zeep.
See also this answer for an example.
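A minimal sketch with zeep (the operation name is hypothetical; client.wsdl.dump() prints what the WSDL actually exposes):
from zeep import Client

client = Client('https://ws.pingdom.com/soap/PingdomAPI.wsdl')
client.wsdl.dump()  # list the services and operations defined in the WSDL
# result = client.service.SomeOperation(arg1, arg2)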
A: It's not true SOAPpy does not work with Python 2.5 - it works, although it's very simple and really, really basic. If you want to talk to any more complicated webservice, ZSI is your only friend.
The really useful demo I found is at http://www.ebi.ac.uk/Tools/webservices/tutorials/python - this really helped me to understand how ZSI works.
A: I recently stumbled up on the same problem. Here is the synopsis of my solution:
Basic constituent code blocks needed
The following are the required basic code blocks of your client application
1. Session request section: request a session with the provider
2. Session authentication section: provide credentials to the provider
3. Client section: create the Client
4. Security Header section: add the WS-Security Header to the Client
5. Consumption section: consume available operations (or methods) as needed
What modules do you need?
Many suggest using Python modules such as urllib2; however, none of those modules worked - at least not for this particular project.
So, here is the list of the modules you need to get.
First of all, you need to download and install the latest version of suds from the following link:
pypi.python.org/pypi/suds-jurko/0.4.1.jurko.2
Additionally, you need to download and install the requests and suds_requests modules from the following links respectively (disclaimer: I am new to posting here, so I can't post more than one link for now).
pypi.python.org/pypi/requests
pypi.python.org/pypi/suds_requests/0.1
Once you successfully download and install these modules, you are good to go.
The code
Following the steps outlined earlier, the code looks like the following:
Imports:
import logging
from suds.client import Client
from suds.wsse import *
from datetime import timedelta,date,datetime,tzinfo
import requests
from requests.auth import HTTPBasicAuth
import suds_requests
Session request and authentication:
username=input('Username:')
password=input('password:')
session = requests.session()
session.auth=(username, password)
Create the Client:
client = Client(WSDL_URL, faults=False, cachingpolicy=1, location=WSDL_URL, transport=suds_requests.RequestsTransport(session))
Add WS-Security Header:
...
addSecurityHeader(client,username,password)
....
def addSecurityHeader(client,username,password):
security=Security()
userNameToken=UsernameToken(username,password)
timeStampToken=Timestamp(validity=600)
security.tokens.append(userNameToken)
security.tokens.append(timeStampToken)
client.set_options(wsse=security)
Please note that this method creates the security header depicted in Fig.1. So, your implementation may vary depending on the correct security header format provided by the owner of the service you are consuming.
Consume the relevant method (or operation) :
result=client.service.methodName(Inputs)
Logging:
One of the best practices in such implementations as this one is logging to see how the communication is executed. In case there is some issue, it makes debugging easy. The following code does basic logging. However, you can log many aspects of the communication in addition to the ones depicted in the code.
logging.basicConfig(level=logging.INFO)
logging.getLogger('suds.client').setLevel(logging.DEBUG)
logging.getLogger('suds.transport').setLevel(logging.DEBUG)
Result:
Here is the result in my case. Note that the server returned HTTP 200. This is the standard success code for HTTP request-response.
(200, (collectionNodeLmp){
timestamp = 2014-12-03 00:00:00-05:00
nodeLmp[] =
(nodeLmp){
pnodeId = 35010357
name = "YADKIN"
mccValue = -0.19
mlcValue = -0.13
price = 36.46
type = "500 KV"
timestamp = 2014-12-03 01:00:00-05:00
errorCodeId = 0
},
(nodeLmp){
pnodeId = 33138769
name = "ZION 1"
mccValue = -0.18
mlcValue = -1.86
price = 34.75
type = "Aggregate"
timestamp = 2014-12-03 01:00:00-05:00
errorCodeId = 0
},
})
A: Zeep is a decent SOAP library for Python that matches what you're asking for: http://docs.python-zeep.org
A: If you're rolling your own I'd highly recommend looking at http://effbot.org/zone/element-soap.htm.
A: SOAPpy is now obsolete, AFAIK, replaced by ZSI. It's a moot point, because I can't get either one to work, much less compile, on either Python 2.5 or Python 2.6.
A: #!/usr/bin/python
# -*- coding: utf-8 -*-
# consume_wsdl_soap_ws_pss.py
import logging.config
from pysimplesoap.client import SoapClient
logging.config.dictConfig({
'version': 1,
'formatters': {
'verbose': {
'format': '%(name)s: %(message)s'
}
},
'handlers': {
'console': {
'level': 'DEBUG',
'class': 'logging.StreamHandler',
'formatter': 'verbose',
},
},
'loggers': {
'pysimplesoap.helpers': {
'level': 'DEBUG',
'propagate': True,
'handlers': ['console'],
},
}
})
WSDL_URL = 'http://www.webservicex.net/stockquote.asmx?WSDL'
client = SoapClient(wsdl=WSDL_URL, ns="web", trace=True)
client['AuthHeaderElement'] = {'username': 'someone', 'password': 'nottelling'}
#Discover operations
list_of_services = [service for service in client.services]
print(list_of_services)
#Discover params
method = client.services['StockQuote']
response = client.GetQuote(symbol='GOOG')
print('GetQuote: {}'.format(response['GetQuoteResult']))
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "136"
}
|
Q: How can the error 'Client found response content type of 'text/html'.. be interpreted I'm using C# and connecting to a web service via an auto-generated C# proxy object. The method I'm calling can be long running, and sometimes times out. I get different errors back; sometimes I get a System.Net.WebException or a System.Web.Services.Protocols.SoapException. These exceptions have properties I can interrogate to find the specific type of error, from which I can display a human-friendly version to the user.
But sometimes I just get an InvalidOperationException, and it has the following Message. Is there any way I can interpret what this is without digging through the string for things I recognize? That feels very dirty, and isn't internationalization-agnostic; the error message might come back in a different language.
Client found response content type of 'text/html; charset=utf-8', but expected 'text/xml'.
The request failed with the error message:
--
<html>
<head>
<title>Request timed out.</title>
<style>
body {font-family:"Verdana";font-weight:normal;font-size: .7em;color:black;}
p {font-family:"Verdana";font-weight:normal;color:black;margin-top: -5px}
b {font-family:"Verdana";font-weight:bold;color:black;margin-top: -5px}
H1 { font-family:"Verdana";font-weight:normal;font-size:18pt;color:red }
H2 { font-family:"Verdana";font-weight:normal;font-size:14pt;color:maroon }
pre {font-family:"Lucida Console";font-size: .9em}
.marker {font-weight: bold; color: black;text-decoration: none;}
.version {color: gray;}
.error {margin-bottom: 10px;}
.expandable { text-decoration:underline; font-weight:bold; color:navy; cursor:hand; }
</style>
</head>
<body bgcolor="white">
<span><H1>Server Error in '/PerformanceManager' Application.<hr width=100% size=1 color=silver></H1>
<h2> <i>Request timed out.</i> </h2></span>
<font face="Arial, Helvetica, Geneva, SunSans-Regular, sans-serif ">
<b> Description: </b>An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
<br><br>
<b> Exception Details: </b>System.Web.HttpException: Request timed out.<br><br>
<b>Source Error:</b> <br><br>
<table width=100% bgcolor="#ffffcc">
<tr>
<td>
<code>
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.</code>
</td>
</tr>
</table>
<br>
<b>Stack Trace:</b> <br><br>
<table width=100% bgcolor="#ffffcc">
<tr>
<td>
<code><pre>
[HttpException (0x80004005): Request timed out.]
</pre></code>
</td>
</tr>
</table>
<br>
<hr width=100% size=1 color=silver>
<b>Version Information:</b> Microsoft .NET Framework Version:2.0.50727.312; ASP.NET Version:2.0.50727.833
</font>
</body>
</html>
<!--
[HttpException]: Request timed out.
-->
--.
Edit:
I have a try-catch around the method on the web-server. I have debugged it, and the web-server method returns (after a minute or so) without any exception. I also added an unhandled exception handler in the web service and a breakpoint there wasn't hit. As soon as the web-service returns, I get this error in the client instead of the result I expected.
A: If you are using .NET version 4.0, request validation is turned on by default for all requests. In previous versions (1.1 and 2.0) it was only on for .aspx pages. You can turn the default validation off. In that case you have to do your due diligence and make sure that the data is clean. Use HtmlEncode. Do the following to turn the validation off:
In the web.config, add the following lines under system.web:
<httpRuntime requestValidationMode="2.0" />
and
<pages validateRequest="false" />
You can read more about this http://www.asp.net/learn/whitepapers/aspnet4/breaking-changes
also http://msdn.microsoft.com/en-us/library/ff649310.aspx
Hope this helps.
A: The web server is returning an HTTP 500 error code. These errors generally happen when an exception is thrown on the web server and there's no logic to catch it, so it spits out an HTTP 500 error. You can usually resolve the problem by placing try-catch blocks in your code.
A: I had this happen as a result of a configuration error in web.config. Checking the connection string etc. might be the answer for the timeout.
A: This is happening because there is an unhandled exception in your Web service, and the .NET runtime is spitting out its HTML yellow screen of death server error/exception dump page, instead of XML.
Since the consumer of your Web service was expecting a text/xml header and instead got text/html, it throws that error.
You should address the cause of your timeouts (perhaps a lengthy SQL query?).
Also, check out this blog post on Jeff Atwood's blog that explains implementing a global unhandled exception handler and using SOAP exceptions.
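A sketch of what that catch chain might look like on the client (proxy and method names hypothetical; requires using System.Net and System.Web.Services.Protocols):
void CallService(MyWebService proxy)
{
    try
    {
        var result = proxy.LongRunningMethod();
    }
    catch (SoapException ex)
    {
        // the server returned a SOAP fault; inspect ex.Code / ex.Detail
    }
    catch (WebException ex)
    {
        // transport-level failure; ex.Status distinguishes timeouts etc.
    }
    catch (InvalidOperationException ex)
    {
        // e.g. an HTML error page came back instead of XML, as in the question
    }
}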
A: Delete web.config file and insert again. http://forums.asp.net/post/916808.aspx
A: Is your web service configured correctly in IIS? Is the pool it's using and the version of ASP.NET (2.0) set correctly? Can you browse the .asmx?
Talking about exceptions, try to put a try-catch block around the line that accesses your web service, and catch System.Web.Services.Protocols.SoapException.
Also, you can set a Timeout on your web service object.
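For example (generated proxy class name hypothetical; Timeout comes from WebClientProtocol and is in milliseconds):
MyService proxy = new MyService();
proxy.Timeout = 5 * 60 * 1000; // five minutes instead of the 100-second default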
A: I got this error after changing the web service return type and SoapDocumentMethod.
Initially it was:
[WebMethod]
public int Foo()
{
return 0;
}
I decided to make it fire and forget type like this:
[SoapDocumentMethod(OneWay = true)]
[WebMethod]
public void Foo()
{
return;
}
In such cases, updating the web reference helped.
To update a web service reference:
1. Expand Solution Explorer.
2. Locate Web References - this will be visible only if you have added a web service reference to your project.
3. Right-click and click Update Web Reference.
A: That means that your consumer is expecting XML from the webservice but the webservice, as your error shows, returns HTML because it's failing due to a timeout.
So you need to talk to the remote webservice provider to let them know it's failing and take corrective action. Unless you are the provider of the webservice in which case you should catch the exceptions and return XML telling the consumer which error occurred (the 'remote provider' should probably do that as well).
A: The problem I had was related to the SOAP version. The asmx service was configured to accept both versions, 1.1 and 1.2, so I think that when you are consuming the service, the client or the server doesn't know which version to resolve.
To fix that, it is necessary to add:
using (wsWebService yourService = new wsWebService())
{
yourService.Url = "https://myUrlService.com/wsWebService.asmx?op=someOption";
yourService.UseDefaultCredentials = true; // this line depends on your authentication type
yourService.SoapVersion = SoapProtocolVersion.Soap11; // asign the version of SOAP
var result = yourService.SomeMethod("Parameter");
}
Where wsWebService is the name of the class generated as a reference.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
}
|
Q: How can I do Databinding in C#? I have the following class
public class Car
{
public string Name { get; set; }
}
and I want to bind this programmatically to a text box.
How do I do that?
Shooting in the dark:
...
Car car = new Car();
TextEdit editBox = new TextEdit();
editBox.DataBinding.Add("Name", car, "Car - Name");
...
I get the following error
"Cannot bind to the propery 'Name' on the target control.
What am I doing wrong and how should I be doing this? I am finding the databinding concept a bit difficult to grasp coming from web-development.
A: editBox.DataBindings.Add("Text", car, "Name");
First arg is the name of the control property, the second is the object to bind, and the last, the name of the object property you want to use as the data source.
A: You are quite close the data bindings line would be
editBox.DataBindings.Add("Text", car, "Name");
This first parameter is the property of your editbox object that will be data bound. The second parameter is the data source you are binding to and the last parameter is the property on the data source that you want to bind to.
Bear in mind that the data binding is one way so if you change the edit box then the car object gets updated but if you change the car name directly the edit box is not updated.
A: You want
editBox.DataBindings.Add("Text", car, "Name");
The first parameter is the name of the property on the control that you want to be databound, the second is the data source, the third parameter is the property on the data source that you want to bind to.
A: Try:
editBox.DataBindings.Add( "Text", car, "Name" );
A: I believe that
editBox.DataBindings.Add(new Binding("Text", car, "Name"));
should do the trick. Didn't try it out, but I think that's the idea.
A: Using C# 6 syntax:
editBox.DataBindings.Add(nameof(editBox.Text), car, nameof(car.Name));
If the Name property is ever renamed, the above code will fail at compile time, which is far more conspicuous than a runtime failure from a literal string.
A: You're trying to bind to the "Name" of the TextEdit control. The name is used for accessing the control programmatically, and cannot be bound against. You should be binding against the Text of the control.
A: Without looking at the syntax, I'm pretty sure it's:
editBox.DataBindings.Add("Text", car, "Name");
A: The following is a generic class that can be used as a property and implements INotifyPropertyChanged, which bound controls use to capture changes in the property value.
public class NotifyValue<T> : INotifyPropertyChanged
{
    // Initialized to an empty delegate so raising the event is always safe.
    public event PropertyChangedEventHandler PropertyChanged = delegate { };
    T _value;
    public T Value
    {
        get
        {
            return _value;
        }
        set
        {
            _value = value;
            // Notify bound controls that Value has changed.
            PropertyChanged.Invoke(this, new PropertyChangedEventArgs("Value"));
        }
    }
}
It can be declared like this (Windows Forms binding resolves public properties, not fields, so expose it as a property):
private NotifyValue<int> myInteger = new NotifyValue<int>();
public NotifyValue<int> MyInteger
{
    get { return myInteger; }
}
and assigned to a textbox like this
Textbox1.DataBindings.Add(
    "Text",
    this,
    "MyInteger.Value",
    false,
    DataSourceUpdateMode.OnPropertyChanged
);
..where "Text" is the property of the textbox and 'this' is the current Form instance.
Note that the class must implement the INotifyPropertyChanged interface (as above); once it raises its PropertyChanged event of type System.ComponentModel.PropertyChangedEventHandler, the control's data binder will subscribe to it and pick up changes.
A: It's
this.editBox.DataBindings.Add(new Binding("Text", car, "Name"));
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
}
|
Q: What is the best way to handle incoming SMS messages? I have a client who wants a solution that allows delivery people to text (SMS) that they have completed a pickup at a particular location. What I'm looking for is code to read an inbound SMS message, or an SMS component if appropriate. This would allow me to create a Windows service to read the message and update a SQL record accordingly.
A: Probably not quite what you're looking for but one approach is to use a gateway like iTagg which provides a number of interfaces for developers to send and receive SMS/MMS etc. Depending on your location, iTagg may be no use but I'm sure there'll be an equivalent for your region.
A: Some time ago I implemented something similar using a GSM modem. Most GSM modems offer AT commands that can be used for receiving and sending SMS messages. At the time, I used a Java library that provided an easy-to-use API. The commands to read and send SMS are really easy, and I bet there is something in .NET for that purpose that can make the task even easier.
I made a little search and I found this article with an example of using AT commands to interact with a GSM phone. I looked into the supplied source and it includes a library with operations related to SMS.
In my previous project I used a Siemens GSM modem with an RS232 interface. It wasn't very expensive and was able to manage all the messages sent by onboard units placed in vehicles. But if you have an unused phone it can work as well.
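To make the AT-command approach concrete, here is a rough C# sketch over System.IO.Ports (the COM port name is a placeholder, and real code needs proper response parsing and error handling rather than fixed sleeps):
using System;
using System.IO.Ports;
using System.Threading;

class SmsModemDemo
{
    static void Main()
    {
        // "COM1" is a placeholder; use the port your GSM modem is attached to.
        using (SerialPort port = new SerialPort("COM1", 9600))
        {
            port.Open();

            // Switch the modem to text mode, then list all stored messages.
            port.Write("AT+CMGF=1\r");
            Thread.Sleep(500);
            port.Write("AT+CMGL=\"ALL\"\r");
            Thread.Sleep(2000);

            // The reply contains the inbound messages; parse them here
            // and update the SQL record accordingly.
            string response = port.ReadExisting();
            Console.WriteLine(response);
        }
    }
}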
A: Thanks Luke, I am thinking more of a GSM modem which would be connected to the server. I think this would give more control than going through a third party, but I take your point and will investigate further.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Do you use source control for your database items? I feel that my shop has a hole because we don't have a solid process in place for versioning our database schema changes. We do a lot of backups so we're more or less covered, but it's bad practice to rely on your last line of defense in this way.
Surprisingly, this seems to be a common thread. Many shops I have spoken to ignore this issue because their databases don't change often, and they basically just try to be meticulous.
However, I know how that story goes. It's only a matter of time before things line up just wrong and something goes missing.
Are there any best practices for this? What are some strategies that have worked for you?
A: The new Database projects in Visual Studio provide source control and change scripts.
They have a nice tool that compares databases and can generate a script that converts the schema of one into the other, or updates the data in one to match the other.
The db schema is "shredded" to create many, many small .sql files, one per DDL command that describes the DB.
+tom
Additional info 2008-11-30
I have been using it as a developer for the past year and really like it. It makes it easy to compare my dev work to production and generate a script to use for the release. I don't know if it is missing features that DBAs need for "enterprise-type" projects.
Because the schema is "shredded" into sql files the source control works fine.
One gotcha is that you need a different mindset when you use a db project. The tool has a "db project" in VS, which is just the sql, plus an automatically generated local database which has the schema and some other admin data -- but none of your application data -- plus your local dev db that you use for app data dev work. You rarely are aware of the automatically generated db, but you have to know it's there so you can leave it alone :). This special db is clearly recognizable because it has a Guid in its name.
The VS DB Project does a nice job of integrating db changes that other team members have made into your local project/associated db, but you need to take the extra step to compare the project schema with your local dev db schema and apply the mods. It makes sense, but it seems awkward at first.
DB Projects are a very powerful tool. They not only generate scripts but can apply them immediately. Be sure not to destroy your production db with it. ;)
I really like the VS DB projects and I expect to use this tool for all my db projects going forward.
+tom
A: Requiring the development teams to use an SQL database source control management system isn't a magic bullet which will prevent issues from happening. On its own, database source control introduces additional overhead, as the developers are required to save the changes they've made to an object in a separate SQL script, open the source control system client, check in the SQL script file using the client and then apply the changes to the live database.
I can suggest using the SSMS add-in called ApexSQL Source Control. It allows developers to easily map database objects with the source control system via the wizard directly from SSMS. The add-in includes support for TFS, Git, Subversion and other SC systems. It also includes support for source controlling Static data.
After downloading and installing ApexSQL Source Control, simply right-click the database you want to version control and navigate to ApexSQL Source Control sub-menu in SSMS. Click the Link database to source control option, select the source control system and the development model. After that you’ll need to provide the log-in information and the repository string for the source control system you’ve chosen.
You can read this article for more information: http://solutioncenter.apexsql.com/sql-source-control-reduce-database-development-time/
A: I do, by saving create/update scripts and a script that generates sample data.
A: Yes, we do it by keeping our SQL as part of our build -- we keep DROP.sql, CREATE.sql, USERS.sql, VALUES.sql and version control these, so we can revert back to any tagged version.
We also have ant tasks which can recreate the db whenever needed.
Plus, the SQL is then tagged along with your source code that goes with it.
A: The most successful scheme I've ever used on a project has combined backups and differential SQL files. Basically we would take a backup of our db after every release and do an SQL dump so that we could create a blank schema from scratch if we needed to as well. Then any time you needed to make a change to the DB you would add an alter script to the sql directory under version control. We would always prefix a sequence number or date to the file name, so the first change would be something like 01_add_created_on_column.sql, and the next script would be 02_added_customers_index. Our CI machine would check for these and run them sequentially on a fresh copy of the db that had been restored from the backup.
We also had some scripts in place that devs could use to re-initialize their local db to the current version with a single command.
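A bare-bones sketch of the CI side of that scheme (the connection string and script directory are placeholders, and a real runner would also want transactions and GO-batch splitting):
using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

class MigrationRunner
{
    static void Main()
    {
        // Placeholder connection string and script directory.
        string connStr = "Server=.;Database=FreshCopy;Integrated Security=true";
        string scriptDir = @"C:\project\sql";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            // The numeric/date prefix on the file names guarantees scripts apply in order.
            foreach (string file in Directory.GetFiles(scriptDir, "*.sql").OrderBy(f => f))
            {
                Console.WriteLine("Applying " + Path.GetFileName(file));
                using (SqlCommand cmd = new SqlCommand(File.ReadAllText(file), conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}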
A: We do source control all our database objects. And just to keep developers honest (because you can create objects without them being in source control), our DBAs periodically look for anything not in source control, and if they find anything, they drop it without asking if it is ok.
A: I use SchemaBank to version control all my database schema changes:
*
*From day 1, I imported my db schema dump into it
*I started to change my schema design using a web browser (because they are SaaS / cloud-based)
*When I want to update my db server, I generate the change (SQL) script from it and apply it to the db. In SchemaBank, they mandate me to commit my work as a version before I can generate an update script. I like this kind of practice so that I can always trace back when I need to.
Our team rule is NEVER touch the db server directly without storing the design work first. But it happens; somebody might be tempted to break the rule for the sake of convenience. We would import the schema dump again into SchemaBank, let it do the diff, and bash someone if a discrepancy is found. Although we could generate the alter scripts from it to bring our db and schema design in sync, we just hate that.
By the way, they also let us create branches within the version control tree so that I can maintain one for staging and one for production. And one for coding sandbox.
A pretty neat web-based schema design tool with version control and change management.
A: I have everything necessary to recreate my DB from bare metal, minus the data itself. I'm sure there are lots of ways to do it, but all my scripts and such are stored off in subversion and we can rebuild the DB structure and such by pulling all that out of subversion and running an installer.
A: I typically build an SQL script for every change I make, and another to revert those changes, and keep those scripts under version control.
Then we have a means to create a new up-to-date database on demand, and can easily move between revisions. Every time we do a release, we lump the scripts together (takes a bit of manual work, but it's rarely actually hard) so we also have a set of scripts that can convert between versions.
Yes, before you say it, this is very similar to the stuff Rails and others do, but it seems to work pretty well, so I have no problems admitting that I shamelessly lifted the idea :)
A: I use SQL CREATE scripts exported from MySQL Workbench, then using their "Export SQL ALTER" functionality I end up with a series of create scripts (numbered, of course) and the alter scripts that can apply the changes between them.
3.- Export SQL ALTER script
Normally you would have to write the ALTER TABLE statements by hand now, reflecting your changes you made to the model. But you can be smart and let Workbench do the hard work for you. Simply select File -> Export -> Forward Engineer SQL ALTER Script… from the main menu.
This will prompt you to specify the SQL CREATE file the current model should be compared to.
Select the SQL CREATE script from step 1. The tool will then generate the ALTER TABLE script for you and you can execute this script against your database to bring it up to date.
You can do this using the MySQL Query Browser or the mysql client. Voila! Your model and database have now been synchronized!
Source: MySQL Workbench Community Edition: Guide to Schema Synchronization
All these scripts, of course, are under version control.
A: Yes, always. You should be able to recreate your production database structure with a useful set of sample data whenever needed. If you don't, over time minor changes to keep things running get forgotten, and then one day you get bitten, big time. It's insurance that you might not think you need, but the day you do need it, it's worth the price ten times over!
A: There has been a lot of discussion about the database model itself, but we also keep the required data in .SQL files.
For example, in order to be useful your application might need this in the install:
INSERT INTO Currency (CurrencyCode, CurrencyName)
VALUES ('AUD', 'Australian Dollars');
INSERT INTO Currency (CurrencyCode, CurrencyName)
VALUES ('USD', 'US Dollars');
We would have a file called currency.sql under subversion. As a manual step in the build process, we compare the previous currency.sql to the latest one and write an upgrade script.
A: We version and source control everything surrounding our databases:
*
*DDL (create and alters)
*DML (reference data, codes, etc.)
*Data Model changes (using ERwin or ER/Studio)
*Database configuration changes (permissions, security objects, general config changes)
We do all this with automated jobs using Change Manager and some custom scripts. We have Change Manager monitoring these changes and notifying when they are done.
A: I believe that every DB should be under source control, and developers should have an easy way to create their local database from scratch. Inspired by Visual Studio for Database Professionals, I've created an open-source tool that scripts MS SQL databases and provides an easy way of deploying them to your local DB engine. Try http://dbsourcetools.codeplex.com/ . Have fun,
- Nathan.
A: Must read Get your database under version control. Check the series of posts by K. Scott Allen.
When it comes to version control, the database is often a second or even third-class citizen. From what I've seen, teams that would never think of writing code without version control in a million years-- and rightly so-- can somehow be completely oblivious to the need for version control around the critical databases their applications rely on. I don't know how you can call yourself a software engineer and maintain a straight face when your database isn't under exactly the same rigorous level of source control as the rest of your code. Don't let this happen to you. Get your database under version control.
A: I absolutely love Rails ActiveRecord migrations. They abstract the DDL into Ruby scripts which can then be easily versioned in your source repository.
However, with a bit of work, you could do the same thing. Any DDL changes (ALTER TABLE, etc.) can be stored in text files. Keep a numbering system (or a date stamp) for the file names, and apply them in sequence.
Rails also has a 'version' table in the DB that keeps track of the last applied migration. You can do the same easily.
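A minimal sketch of such a version-table helper in C# (the schema_version table and its column are made-up names for illustration):
using System.Data.SqlClient;

static class SchemaVersion
{
    // Assumes a hypothetical one-row table: CREATE TABLE schema_version (version int)
    public static int Get(SqlConnection conn)
    {
        using (SqlCommand cmd = new SqlCommand("SELECT version FROM schema_version", conn))
        {
            return (int)cmd.ExecuteScalar();
        }
    }

    public static void Set(SqlConnection conn, int version)
    {
        using (SqlCommand cmd = new SqlCommand("UPDATE schema_version SET version = @v", conn))
        {
            cmd.Parameters.AddWithValue("@v", version);
            cmd.ExecuteNonQuery();
        }
    }
}
At deploy time you would apply each numbered change file whose number is greater than Get(), then call Set() with the new number.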
A: Check out LiquiBase for managing database changes using source control.
A: You should never just log in and start entering "ALTER TABLE" commands to change a production database. The project I'm on has database on every customer site, and so every change to the database is made in two places, a dump file that is used to create a new database on a new customer site, and an update file that is run on every update which checks your current database version number against the highest number in the file, and updates your database in place. So for instance, the last couple of updates:
if [ $VERSION \< '8.0.108' ] ; then
psql -U cosuser $dbName << EOF8.0.108
BEGIN TRANSACTION;
--
-- Remove foreign key that shouldn't have been there.
-- PCR:35665
--
ALTER TABLE migratorjobitems
DROP CONSTRAINT migratorjobitems_destcmaid_fkey;
--
-- Increment the version
UPDATE sys_info
SET value = '8.0.108'
WHERE key = 'DB VERSION';
END TRANSACTION;
EOF8.0.108
fi
if [ $VERSION \< '8.0.109' ] ; then
psql -U cosuser $dbName << EOF8.0.109
BEGIN TRANSACTION;
--
-- I missed a couple of cases when I changed the legacy playlist
-- from reporting showplaylistidnum to playlistidnum
--
ALTER TABLE featureidrequestkdcs
DROP CONSTRAINT featureidrequestkdcs_cosfeatureid_fkey;
ALTER TABLE featureidrequestkdcs
ADD CONSTRAINT featureidrequestkdcs_cosfeatureid_fkey
FOREIGN KEY (cosfeatureid)
REFERENCES playlist(playlistidnum)
ON DELETE CASCADE;
--
ALTER TABLE ticket_system_ids
DROP CONSTRAINT ticket_system_ids_showplaylistidnum_fkey;
ALTER TABLE ticket_system_ids
RENAME showplaylistidnum
TO playlistidnum;
ALTER TABLE ticket_system_ids
ADD CONSTRAINT ticket_system_ids_playlistidnum_fkey
FOREIGN KEY (playlistidnum)
REFERENCES playlist(playlistidnum)
ON DELETE CASCADE;
--
-- Increment the version
UPDATE sys_info
SET value = '8.0.109'
WHERE key = 'DB VERSION';
END TRANSACTION;
EOF8.0.109
fi
I'm sure there is a better way to do this, but it's worked for me so far.
A: I source control the database schema by scripting out all objects (table definitions, indexes, stored procedures, etc.). As for the data itself, I simply rely on regular backups. This ensures that all structural changes are captured with proper revision history, but doesn't burden the database each time data changes.
A: At our business we use database change scripts. When a script is run, its name is stored in the database and it won't run again unless that row is removed. Scripts are named based on date, time and code branch, so controlled execution is possible.
Lots and lots of testing is done before the scripts are run in the live environment, so "oopsies" only happen, generally speaking, on development databases.
A: We're in the process of moving all the databases to source control. We're using SQL Compare to script out the database (a Professional edition feature, unfortunately) and putting that result into SVN.
The success of your implementation will depend a lot on the culture and practices of your organization. People here believe in creating a database per application. There is a common set of databases that are used by most applications as well, causing a lot of interdatabase dependencies (some of them circular). Putting the database schemas into source control has been notoriously difficult because of the interdatabase dependencies that our systems have.
Best of luck to you, the sooner you try it out the sooner you'll have your issues sorted out.
A: I have used the dbdeploy tool from ThoughtWorks at http://dbdeploy.com/. It encourages the use of migration scripts. Each release, we consolidated the change scripts into a single file to ease understanding and to allow DBAs to 'bless' the changes.
A: This has always been a big annoyance for me too - it seems like it is just way too easy to make a quick change to your development database, save it (forgetting to save a change script), and then you're stuck. You could undo what you just did and redo it to create the change script, or write it from scratch if you want, though that's a lot of time spent writing scripts.
A tool that I have used in the past that has helped with this some is SQL Delta. It will show you the differences between two databases (SQL server/Oracle I believe) and generate all the change scripts necessary to migrate A->B. Another nice thing it does is show all the differences between database content between the production (or test) DB and your development DB. Since more and more apps store configuration and state that is crucial to their execution in database tables, it can be a real pain to have change scripts that remove, add, and alter the proper rows. SQL Delta shows the rows in the database just like they would look in a Diff tool - changed, added, deleted.
An excellent tool. Here is the link:
http://www.sqldelta.com/
A: RedGate is great, we generate new snapshots when database changes are made (a tiny binary file) and keep that file in the projects as a resource. Whenever we need to update the database, we use RedGate's toolkit to update the database, as well as being able to create new databases from empty ones.
RedGate also makes Data snapshots, while I haven't personally worked with them, they are just as robust.
A: FYI This was also brought up a few days ago by Dana ... Stored procedures/DB schema in source control
A: Here is a sample poor man's solution: a trigger implementing tracking of changes on db objects (via DDL statements) on a SQL Server 2005 / 2008 database. It also contains a simple sample of how to enforce the usage of a required <Version>someValue</Version> XML tag in the source code for each sql command run on the database, plus tracking of the current db version and type (dev, test, qa, fb, prod).
One could extend it with additional required tags as needed.
The code is rather long - it creates the empty database, the needed tracking table structure, the required db functions and the populating trigger, all running under a [ga] schema.
USE [master]
GO
/****** Object: Database [DBGA_DEV] Script Date: 04/22/2009 13:22:01 ******/
CREATE DATABASE [DBGA_DEV] ON PRIMARY
( NAME = N'DBGA_DEV', FILENAME = N'D:\GENAPP\DATA\DBFILES\DBGA_DEV.mdf' , SIZE = 3072KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'DBGA_DEV_log', FILENAME = N'D:\GENAPP\DATA\DBFILES\DBGA_DEV_log.ldf' , SIZE = 6208KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
GO
ALTER DATABASE [DBGA_DEV] SET COMPATIBILITY_LEVEL = 100
GO
IF (1 = FULLTEXTSERVICEPROPERTY('IsFullTextInstalled'))
begin
EXEC [DBGA_DEV].[dbo].[sp_fulltext_database] @action = 'enable'
end
GO
ALTER DATABASE [DBGA_DEV] SET ANSI_NULL_DEFAULT OFF
GO
ALTER DATABASE [DBGA_DEV] SET ANSI_NULLS OFF
GO
ALTER DATABASE [DBGA_DEV] SET ANSI_PADDING ON
GO
ALTER DATABASE [DBGA_DEV] SET ANSI_WARNINGS OFF
GO
ALTER DATABASE [DBGA_DEV] SET ARITHABORT OFF
GO
ALTER DATABASE [DBGA_DEV] SET AUTO_CLOSE OFF
GO
ALTER DATABASE [DBGA_DEV] SET AUTO_CREATE_STATISTICS ON
GO
ALTER DATABASE [DBGA_DEV] SET AUTO_SHRINK OFF
GO
ALTER DATABASE [DBGA_DEV] SET AUTO_UPDATE_STATISTICS ON
GO
ALTER DATABASE [DBGA_DEV] SET CURSOR_CLOSE_ON_COMMIT OFF
GO
ALTER DATABASE [DBGA_DEV] SET CURSOR_DEFAULT GLOBAL
GO
ALTER DATABASE [DBGA_DEV] SET CONCAT_NULL_YIELDS_NULL OFF
GO
ALTER DATABASE [DBGA_DEV] SET NUMERIC_ROUNDABORT OFF
GO
ALTER DATABASE [DBGA_DEV] SET QUOTED_IDENTIFIER OFF
GO
ALTER DATABASE [DBGA_DEV] SET RECURSIVE_TRIGGERS OFF
GO
ALTER DATABASE [DBGA_DEV] SET DISABLE_BROKER
GO
ALTER DATABASE [DBGA_DEV] SET AUTO_UPDATE_STATISTICS_ASYNC OFF
GO
ALTER DATABASE [DBGA_DEV] SET DATE_CORRELATION_OPTIMIZATION OFF
GO
ALTER DATABASE [DBGA_DEV] SET TRUSTWORTHY OFF
GO
ALTER DATABASE [DBGA_DEV] SET ALLOW_SNAPSHOT_ISOLATION OFF
GO
ALTER DATABASE [DBGA_DEV] SET PARAMETERIZATION SIMPLE
GO
ALTER DATABASE [DBGA_DEV] SET READ_COMMITTED_SNAPSHOT OFF
GO
ALTER DATABASE [DBGA_DEV] SET HONOR_BROKER_PRIORITY OFF
GO
ALTER DATABASE [DBGA_DEV] SET READ_WRITE
GO
ALTER DATABASE [DBGA_DEV] SET RECOVERY FULL
GO
ALTER DATABASE [DBGA_DEV] SET MULTI_USER
GO
ALTER DATABASE [DBGA_DEV] SET PAGE_VERIFY CHECKSUM
GO
ALTER DATABASE [DBGA_DEV] SET DB_CHAINING OFF
GO
EXEC [DBGA_DEV].sys.sp_addextendedproperty @name=N'DbType', @value=N'DEV'
GO
EXEC [DBGA_DEV].sys.sp_addextendedproperty @name=N'DbVersion', @value=N'0.0.1.20090414.1100'
GO
USE [DBGA_DEV]
GO
/****** Object: Schema [ga] Script Date: 04/22/2009 13:21:29 ******/
CREATE SCHEMA [ga] AUTHORIZATION [dbo]
GO
EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Contains the objects of the Generic Application database' , @level0type=N'SCHEMA',@level0name=N'ga'
GO
/****** Object: Table [ga].[tb_DataMeta_ObjChangeLog] Script Date: 04/22/2009 13:21:40 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [ga].[tb_DataMeta_ObjChangeLog](
[LogId] [int] IDENTITY(1,1) NOT NULL,
[TimeStamp] [timestamp] NOT NULL,
[DatabaseName] [varchar](256) NOT NULL,
[SchemaName] [varchar](256) NOT NULL,
[DbVersion] [varchar](20) NOT NULL,
[DbType] [varchar](20) NOT NULL,
[EventType] [varchar](50) NOT NULL,
[ObjectName] [varchar](256) NOT NULL,
[ObjectType] [varchar](25) NOT NULL,
[Version] [varchar](50) NULL,
[SqlCommand] [varchar](max) NOT NULL,
[EventDate] [datetime] NOT NULL,
[LoginName] [varchar](256) NOT NULL,
[FirstName] [varchar](256) NULL,
[LastName] [varchar](50) NULL,
[ChangeDescription] [varchar](1000) NULL,
[Description] [varchar](1000) NULL,
[ObjVersion] [varchar](20) NOT NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'The database version as written in the extended prop of the database' , @level0type=N'SCHEMA',@level0name=N'ga', @level1type=N'TABLE',@level1name=N'tb_DataMeta_ObjChangeLog', @level2type=N'COLUMN',@level2name=N'DbVersion'
GO
EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'dev , test , qa , fb or prod' , @level0type=N'SCHEMA',@level0name=N'ga', @level1type=N'TABLE',@level1name=N'tb_DataMeta_ObjChangeLog', @level2type=N'COLUMN',@level2name=N'DbType'
GO
EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'The name of the object as it is registered in the sys.objects ' , @level0type=N'SCHEMA',@level0name=N'ga', @level1type=N'TABLE',@level1name=N'tb_DataMeta_ObjChangeLog', @level2type=N'COLUMN',@level2name=N'ObjectName'
GO
EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'' , @level0type=N'SCHEMA',@level0name=N'ga', @level1type=N'TABLE',@level1name=N'tb_DataMeta_ObjChangeLog', @level2type=N'COLUMN',@level2name=N'Description'
GO
SET IDENTITY_INSERT [ga].[tb_DataMeta_ObjChangeLog] ON
INSERT [ga].[tb_DataMeta_ObjChangeLog] ([LogId], [DatabaseName], [SchemaName], [DbVersion], [DbType], [EventType], [ObjectName], [ObjectType], [Version], [SqlCommand], [EventDate], [LoginName], [FirstName], [LastName], [ChangeDescription], [Description], [ObjVersion]) VALUES (3, N'DBGA_DEV', N'en', N'0.0.1.20090414.1100', N'DEV', N'DROP_TABLE', N'tb_BL_Products', N'TABLE', N' some', N'<EVENT_INSTANCE><EventType>DROP_TABLE</EventType><PostTime>2009-04-22T11:03:11.880</PostTime><SPID>57</SPID><ServerName>YSG</ServerName><LoginName>ysg\yordgeor</LoginName><UserName>dbo</UserName><DatabaseName>DBGA_DEV</DatabaseName><SchemaName>en</SchemaName><ObjectName>tb_BL_Products</ObjectName><ObjectType>TABLE</ObjectType><TSQLCommand><SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE"/><CommandText>drop TABLE [en].[tb_BL_Products] --<Version> some</Version>
</CommandText></TSQLCommand></EVENT_INSTANCE>', CAST(0x00009BF300B6271C AS DateTime), N'ysg\yordgeor', N'Yordan', N'Georgiev', NULL, NULL, N'0.0.0')
INSERT [ga].[tb_DataMeta_ObjChangeLog] ([LogId], [DatabaseName], [SchemaName], [DbVersion], [DbType], [EventType], [ObjectName], [ObjectType], [Version], [SqlCommand], [EventDate], [LoginName], [FirstName], [LastName], [ChangeDescription], [Description], [ObjVersion]) VALUES (4, N'DBGA_DEV', N'en', N'0.0.1.20090414.1100', N'DEV', N'CREATE_TABLE', N'tb_BL_Products', N'TABLE', N' 2.2.2 ', N'<EVENT_INSTANCE><EventType>CREATE_TABLE</EventType><PostTime>2009-04-22T11:03:18.620</PostTime><SPID>57</SPID><ServerName>YSG</ServerName><LoginName>ysg\yordgeor</LoginName><UserName>dbo</UserName><DatabaseName>DBGA_DEV</DatabaseName><SchemaName>en</SchemaName><ObjectName>tb_BL_Products</ObjectName><ObjectType>TABLE</ObjectType><TSQLCommand><SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE"/><CommandText>CREATE TABLE [en].[tb_BL_Products](
[ProducId] [int] NULL,
[ProductName] [nchar](10) NULL,
[ProductDescription] [varchar](5000) NULL
) ON [PRIMARY]
/*
<Version> 2.2.2 </Version>
*/
</CommandText></TSQLCommand></EVENT_INSTANCE>', CAST(0x00009BF300B62F07 AS DateTime), N'ysg\yordgeor', N'Yordan', N'Georgiev', NULL, NULL, N'0.0.0')
INSERT [ga].[tb_DataMeta_ObjChangeLog] ([LogId], [DatabaseName], [SchemaName], [DbVersion], [DbType], [EventType], [ObjectName], [ObjectType], [Version], [SqlCommand], [EventDate], [LoginName], [FirstName], [LastName], [ChangeDescription], [Description], [ObjVersion]) VALUES (5, N'DBGA_DEV', N'en', N'0.0.1.20090414.1100', N'DEV', N'DROP_TABLE', N'tb_BL_Products', N'TABLE', N' 2.2.2 ', N'<EVENT_INSTANCE><EventType>DROP_TABLE</EventType><PostTime>2009-04-22T11:25:12.620</PostTime><SPID>57</SPID><ServerName>YSG</ServerName><LoginName>ysg\yordgeor</LoginName><UserName>dbo</UserName><DatabaseName>DBGA_DEV</DatabaseName><SchemaName>en</SchemaName><ObjectName>tb_BL_Products</ObjectName><ObjectType>TABLE</ObjectType><TSQLCommand><SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE"/><CommandText>drop TABLE [en].[tb_BL_Products] 
</CommandText></TSQLCommand></EVENT_INSTANCE>', CAST(0x00009BF300BC32F1 AS DateTime), N'ysg\yordgeor', N'Yordan', N'Georgiev', NULL, NULL, N'0.0.0')
INSERT [ga].[tb_DataMeta_ObjChangeLog] ([LogId], [DatabaseName], [SchemaName], [DbVersion], [DbType], [EventType], [ObjectName], [ObjectType], [Version], [SqlCommand], [EventDate], [LoginName], [FirstName], [LastName], [ChangeDescription], [Description], [ObjVersion]) VALUES (6, N'DBGA_DEV', N'en', N'0.0.1.20090414.1100', N'DEV', N'CREATE_TABLE', N'tb_BL_Products', N'TABLE', N' 2.2.2 ', N'<EVENT_INSTANCE><EventType>CREATE_TABLE</EventType><PostTime>2009-04-22T11:25:19.053</PostTime><SPID>57</SPID><ServerName>YSG</ServerName><LoginName>ysg\yordgeor</LoginName><UserName>dbo</UserName><DatabaseName>DBGA_DEV</DatabaseName><SchemaName>en</SchemaName><ObjectName>tb_BL_Products</ObjectName><ObjectType>TABLE</ObjectType><TSQLCommand><SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE"/><CommandText>CREATE TABLE [en].[tb_BL_Products](
[ProducId] [int] NULL,
[ProductName] [nchar](10) NULL,
[ProductDescription] [varchar](5000) NULL
) ON [PRIMARY]
/*
<Version> 2.2.2 </Version>
*/
</CommandText></TSQLCommand></EVENT_INSTANCE>', CAST(0x00009BF300BC3A69 AS DateTime), N'ysg\yordgeor', N'Yordan', N'Georgiev', NULL, NULL, N'0.0.0')
SET IDENTITY_INSERT [ga].[tb_DataMeta_ObjChangeLog] OFF
/****** Object: Table [ga].[tb_BLSec_LoginsForUsers] Script Date: 04/22/2009 13:21:40 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [ga].[tb_BLSec_LoginsForUsers](
[LoginsForUsersId] [int] IDENTITY(1,1) NOT NULL,
[LoginName] [nvarchar](100) NOT NULL,
[FirstName] [varchar](100) NOT NULL,
[SecondName] [varchar](100) NULL,
[LastName] [varchar](100) NOT NULL,
[DomainName] [varchar](100) NOT NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
SET IDENTITY_INSERT [ga].[tb_BLSec_LoginsForUsers] ON
INSERT [ga].[tb_BLSec_LoginsForUsers] ([LoginsForUsersId], [LoginName], [FirstName], [SecondName], [LastName], [DomainName]) VALUES (1, N'ysg\yordgeor', N'Yordan', N'Stanchev', N'Georgiev', N'yordgeor')
SET IDENTITY_INSERT [ga].[tb_BLSec_LoginsForUsers] OFF
/****** Object: Table [en].[tb_BL_Products] Script Date: 04/22/2009 13:21:40 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [en].[tb_BL_Products](
[ProducId] [int] NULL,
[ProductName] [nchar](10) NULL,
[ProductDescription] [varchar](5000) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: StoredProcedure [ga].[procUtils_SqlCheatSheet] Script Date: 04/22/2009 13:21:37 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [ga].[procUtils_SqlCheatSheet]
as
set nocount on
--what was the name of the table with something like role
/*
SELECT * from sys.tables where [name] like '%POC%'
*/
-- what are the columns of this table
/*
select column_name , DATA_TYPE , CHARACTER_MAXIMUM_LENGTH, table_name from Information_schema.columns where table_name='tbGui_ExecutePOC'
*/
-- find proc
--what was the name of procedure with something like role
/*
select * from sys.procedures where [name] like '%ext%'
exec sp_HelpText procName
*/
/*
exec sp_helpText procUtils_InsertGenerator
*/
--how to list all databases in sql server
/*
SELECT database_id AS ID, NULL AS ParentID, name AS Text FROM sys.databases ORDER BY [name]
*/
--HOW-TO LIST ALL TABLES IN A SQL SERVER 2005 DATABASE
/*
SELECT TABLE_NAME FROM [POC].INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
AND TABLE_NAME <> 'dtproperties'
ORDER BY TABLE_NAME
*/
--HOW-TO ENABLE XP_CMDSHELL START
-------------------------------------------------------------------------
-- configure verbose mode temporarily
-- EXECUTE sp_configure 'show advanced options', 1
-- RECONFIGURE WITH OVERRIDE
--GO
--ENABLE xp_cmdshell
-- EXECUTE sp_configure 'xp_cmdshell', '1'
-- RECONFIGURE WITH OVERRIDE
-- EXEC SP_CONFIGURE 'show advanced option', '1';
-- SHOW THE CONFIGURATION
-- EXEC SP_CONFIGURE;
--turn show advance options off
-- GO
--EXECUTE sp_configure 'show advanced options', 0
-- RECONFIGURE WITH OVERRIDE
-- GO
--HOW-TO ENABLE XP_CMDSHELL END
-------------------------------------------------------------------------
--HOW-TO IMPLEMENT SLEEP
-- sleep for 10 seconds
-- WAITFOR DELAY '00:00:10' SELECT * FROM My_Table
/* LIST ALL PRIMARY KEYS
SELECT
INFORMATION_SCHEMA.TABLE_CONSTRAINTS.TABLE_NAME AS TABLE_NAME,
INFORMATION_SCHEMA.KEY_COLUMN_USAGE.COLUMN_NAME AS COLUMN_NAME,
REPLACE(INFORMATION_SCHEMA.TABLE_CONSTRAINTS.CONSTRAINT_TYPE,' ', '_') AS CONSTRAINT_TYPE
FROM
INFORMATION_SCHEMA.TABLE_CONSTRAINTS
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE ON
INFORMATION_SCHEMA.TABLE_CONSTRAINTS.CONSTRAINT_NAME =
INFORMATION_SCHEMA.KEY_COLUMN_USAGE.CONSTRAINT_NAME
WHERE
INFORMATION_SCHEMA.TABLE_CONSTRAINTS.TABLE_NAME <> N'sysdiagrams'
ORDER BY
INFORMATION_SCHEMA.TABLE_CONSTRAINTS.TABLE_NAME ASC
*/
--HOW-TO COPY TABLE AND THE WHOLE TABLE DATA , COPY TABLE FROM DB TO DB
--==================================================START
/*
use Poc_Dev
go
drop table tbGui_LinksVisibility
use POc_test
go
select *
INTO [POC_Dev].[ga].[tbGui_LinksVisibility]
from [POC_TEST].[ga].[tbGui_LinksVisibility]
*/
--HOW-TO COPY TABLE AND THE WHOLE TABLE DATA , COPY TABLE FROM DB TO DB
--====================================================END
--=================================================== SEE TABLE METADATA START
/*
SELECT c.name AS [COLUMN_NAME], sc.data_type AS [DATA_TYPE], [value] AS
[DESCRIPTION] , c.max_length as [MAX_LENGTH] , c.is_nullable AS [OPTIONAL]
, c.is_identity AS [IS_PRIMARY_KEY] FROM sys.extended_properties AS ep
INNER JOIN sys.tables AS t ON ep.major_id = t.object_id
INNER JOIN sys.columns AS c ON ep.major_id = c.object_id AND ep.minor_id
= c.column_id
INNER JOIN INFORMATION_SCHEMA.COLUMNS sc ON t.name = sc.table_name and
c.name = sc.column_name
WHERE class = 1 and t.name = 'tbGui_ExecutePOC' ORDER BY SC.DATA_TYPE
*/
--=================================================== SEE TABLE METADATA END
/*
select * from Information_schema.columns
select table_name , column_name from Information_schema.columns where table_name='tbGui_Wizards'
*/
--=================================================== LIST ALL TABLES AND THEIR DESCRIPTOINS START
/*
SELECT T.name AS TableName, CAST(Props.value AS varchar(1000)) AS
TableDescription
FROM sys.tables AS T LEFT OUTER JOIN
(SELECT class, class_desc, major_id, minor_id,
name, value
FROM sys.extended_properties
WHERE (minor_id = 0) AND (class = 1)) AS
Props ON T.object_id = Props.major_id
WHERE (T.type = 'U') AND (T.name <> N'sysdiagrams')
ORDER BY TableName
*/
--=================================================== LIST ALL TABLES AND THEIR DESCRIPTOINS START
--=================================================== LIST ALL OBJECTS FROM DB START
/*
use DB
--HOW-TO LIST ALL PROCEDURE IN A DATABASE
select s.name from sysobjects s where type = 'P'
--HOW-TO LIST ALL TRIGGERS BY NAME IN A DATABASE
select s.name from sysobjects s where type = 'TR'
--HOW-TO LIST TABLES IN A DATABASE
select s.name from sysobjects s where type = 'U'
--how-to list all system tables in a database
select s.name from sysobjects s where type = 's'
--how-to list all the views in a database
select s.name from sysobjects s where type = 'v'
*/
/*
Similarly you can find out other objects created by user, simple change type =
C = CHECK constraint
D = Default or DEFAULT constraint
F = FOREIGN KEY constraint
L = Log
FN = Scalar function
IF = In-lined table-function
P = Stored procedure
PK = PRIMARY KEY constraint (type is K)
RF = Replication filter stored procedure
S = System table
TF = Table function
TR = Trigger
U = User table ( this is the one I discussed above in the example)
UQ = UNIQUE constraint (type is K)
V = View
X = Extended stored procedure
*/
--=================================================== HOW-TO SEE ALL MY PERMISSIONS START
/*
SELECT * FROM fn_my_permissions(NULL, 'SERVER');
USE poc_qa;
SELECT * FROM fn_my_permissions (NULL, 'database');
GO
*/
--=================================================== HOW-TO SEE ALL MY PERMISSIONS END
/*
--find table
use poc_dev
go
select s.name from sysobjects s where type = 'u' and s.name like '%Visibility%'
select * from tbGui_LinksVisibility
*/
/* find cursor
use poc
go
DECLARE @procName varchar(100)
DECLARE @cursorProcNames CURSOR
SET @cursorProcNames = CURSOR FOR
select name from sys.procedures where modify_date > '2009-02-05 13:12:15.273' order by modify_date desc
OPEN @cursorProcNames
FETCH NEXT
FROM @cursorProcNames INTO @procName
WHILE @@FETCH_STATUS = 0
BEGIN
set nocount off;
exec sp_HelpText @procName --- or print them
-- print @procName
FETCH NEXT
FROM @cursorProcNames INTO @procName
END
CLOSE @cursorProcNames
select @@error
*/
/* -- SEE STORED PROCEDURE EXT PROPS
SELECT ep.name as 'EXT_PROP_NAME' , SP.NAME , [value] as 'DESCRIPTION' FROM sys.extended_properties as ep left join sys.procedures as sp on sp.object_id = ep.major_id where sp.type='P'
-- what the hell I ve been doing lately on sql server 2005 / 2008
select o.name ,
(SELECT [definition] AS [text()] FROM sys.all_sql_modules where sys.all_sql_modules.object_id=a.object_id FOR XML PATH(''), TYPE) AS Statement_Text
, a.object_id, o.modify_date from sys.all_sql_modules a left join sys.objects o on a.object_id=o.object_id order by 4 desc
-- GET THE RIGHT LANG SCHEMA START
DECLARE @template AS varchar(max)
SET @template = 'SELECT * FROM {object_name}'
DECLARE @object_name AS sysname
SELECT @object_name = QUOTENAME(s.name) + '.' + QUOTENAME(o.name)
FROM sys.objects o
INNER JOIN sys.schemas s
ON s.schema_id = o.schema_id
WHERE o.object_id = OBJECT_ID(QUOTENAME(@LANG) + '.[TestingLanguagesInNameSpacesDelMe]')
IF @object_name IS NOT NULL
BEGIN
DECLARE @sql AS varchar(max)
SET @sql = REPLACE(@template, '{object_name}', @object_name)
EXEC (@sql)
END
-- GET THE RIGHT LANG SCHEMA END
-- SEE STORED PROCEDURE EXT PROPS end*/
set nocount off
GO
EXEC sys.sp_addextendedproperty @name=N'AuthorName', @value=N'Yordan Georgiev' , @level0type=N'SCHEMA',@level0name=N'ga', @level1type=N'PROCEDURE',@level1name=N'procUtils_SqlCheatSheet'
GO
EXEC sys.sp_addextendedproperty @name=N'ProcDescription', @value=N'TODO:ADD HERE DESCRPIPTION' , @level0type=N'SCHEMA',@level0name=N'ga', @level1type=N'PROCEDURE',@level1name=N'procUtils_SqlCheatSheet'
GO
EXEC sys.sp_addextendedproperty @name=N'ProcVersion', @value=N'0.1.0.20090406.1317' , @level0type=N'SCHEMA',@level0name=N'ga', @level1type=N'PROCEDURE',@level1name=N'procUtils_SqlCheatSheet'
GO
/****** Object: UserDefinedFunction [ga].[GetDbVersion] Script Date: 04/22/2009 13:21:42 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE FUNCTION [ga].[GetDbVersion]()
RETURNS VARCHAR(20)
BEGIN
RETURN convert(varchar(20) , (select value from sys.extended_properties where name='DbVersion' and class_desc='DATABASE') )
END
GO
EXEC sys.sp_addextendedproperty @name=N'AuthorName', @value=N'Yordan Georgiev' , @level0type=N'SCHEMA',@level0name=N'ga', @level1type=N'FUNCTION',@level1name=N'GetDbVersion'
GO
EXEC sys.sp_addextendedproperty @name=N'ChangeDescription', @value=N'Initial creation' , @level0type=N'SCHEMA',@level0name=N'ga', @level1type=N'FUNCTION',@level1name=N'GetDbVersion'
GO
EXEC sys.sp_addextendedproperty @name=N'CreatedWhen', @value=N'getDate()' , @level0type=N'SCHEMA',@level0name=N'ga', @level1type=N'FUNCTION',@level1name=N'GetDbVersion'
GO
EXEC sys.sp_addextendedproperty @name=N'Description', @value=N'Gets the current version of the database ' , @level0type=N'SCHEMA',@level0name=N'ga', @level1type=N'FUNCTION',@level1name=N'GetDbVersion'
GO
/****** Object: UserDefinedFunction [ga].[GetDbType] Script Date: 04/22/2009 13:21:42 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE FUNCTION [ga].[GetDbType]()
RETURNS VARCHAR(30)
BEGIN
RETURN convert(varchar(30) , (select value from sys.extended_properties where name='DbType' and class_desc='DATABASE') )
END
GO
/****** Object: Default [DF_tb_DataMeta_ObjChangeLog_DbVersion] Script Date: 04/22/2009 13:21:40 ******/
ALTER TABLE [ga].[tb_DataMeta_ObjChangeLog] ADD CONSTRAINT [DF_tb_DataMeta_ObjChangeLog_DbVersion] DEFAULT ('select ga.GetDbVersion()') FOR [DbVersion]
GO
/****** Object: Default [DF_tb_DataMeta_ObjChangeLog_EventDate] Script Date: 04/22/2009 13:21:40 ******/
ALTER TABLE [ga].[tb_DataMeta_ObjChangeLog] ADD CONSTRAINT [DF_tb_DataMeta_ObjChangeLog_EventDate] DEFAULT (getdate()) FOR [EventDate]
GO
/****** Object: Default [DF_tb_DataMeta_ObjChangeLog_ObjVersion] Script Date: 04/22/2009 13:21:40 ******/
ALTER TABLE [ga].[tb_DataMeta_ObjChangeLog] ADD CONSTRAINT [DF_tb_DataMeta_ObjChangeLog_ObjVersion] DEFAULT ('0.0.0') FOR [ObjVersion]
GO
/****** Object: DdlTrigger [trigMetaDoc_TraceDbChanges] Script Date: 04/22/2009 13:21:29 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
create trigger [trigMetaDoc_TraceDbChanges]
on database
for create_procedure, alter_procedure, drop_procedure,
create_table, alter_table, drop_table,
create_function, alter_function, drop_function ,
create_trigger , alter_trigger , drop_trigger
as
set nocount on
declare @data xml
set @data = EVENTDATA()
declare @DbVersion varchar(20)
set @DbVersion =(select ga.GetDbVersion())
declare @DbType varchar(20)
set @DbType =(select ga.GetDbType())
declare @DbName varchar(256)
set @DbName =@data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'varchar(256)')
declare @EventType varchar(256)
set @EventType =@data.value('(/EVENT_INSTANCE/EventType)[1]', 'varchar(50)')
declare @ObjectName varchar(256)
set @ObjectName = @data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(256)')
declare @ObjectType varchar(25)
set @ObjectType = @data.value('(/EVENT_INSTANCE/ObjectType)[1]', 'varchar(25)')
declare @TSQLCommand varchar(max)
set @TSQLCommand = @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'varchar(max)')
declare @opentag varchar(4)
set @opentag= '&lt;'
declare @closetag varchar(4)
set @closetag= '&gt;'
declare @newDataTxt varchar(max)
set @newDataTxt= cast(@data as varchar(max))
set @newDataTxt = REPLACE ( REPLACE(@newDataTxt , @opentag , '<') , @closetag , '>')
-- print @newDataTxt
declare @newDataXml xml
set @newDataXml = CONVERT ( xml , @newDataTxt)
declare @Version varchar(50)
set @Version = @newDataXml.value('(/EVENT_INSTANCE/TSQLCommand/CommandText/Version)[1]', 'varchar(50)')
-- if we are dropping take the version from the existing object
if ( SUBSTRING(@EventType , 0 , 5)) = 'DROP'
set @Version =( select top 1 [Version] from ga.tb_DataMeta_ObjChangeLog where ObjectName=@ObjectName order by [LogId] desc)
declare @LoginName varchar(256)
set @LoginName = @data.value('(/EVENT_INSTANCE/LoginName)[1]', 'varchar(256)')
declare @FirstName varchar(50)
set @FirstName= (select [FirstName] from [ga].[tb_BLSec_LoginsForUsers] where [LoginName] = @LoginName)
declare @LastName varchar(50)
set @LastName = (select [LastName] from [ga].[tb_BLSec_LoginsForUsers] where [LoginName] = @LoginName)
declare @SchemaName sysname
set @SchemaName = @data.value('(/EVENT_INSTANCE/SchemaName)[1]', 'sysname');
--declare @Description xml
--set @Description = @data.query('(/EVENT_INSTANCE/TSQLCommand/text())')
--print 'VERSION IS ' + @Version
--print @newDataTxt
--print cast(@data as varchar(max))
-- select column_name from information_schema.columns where table_name ='tb_DataMeta_ObjChangeLog'
insert into [ga].[tb_DataMeta_ObjChangeLog]
(
[DatabaseName] ,
[SchemaName],
[DbVersion] ,
[DbType],
[EventType],
[ObjectName],
[ObjectType] ,
[Version],
[SqlCommand] ,
[LoginName] ,
[FirstName],
[LastName]
)
values(
@DbName,
@SchemaName,
@DbVersion,
@DbType,
@EventType,
@ObjectName,
@ObjectType ,
@Version,
@newDataTxt,
@LoginName ,
@FirstName ,
@LastName
)
GO
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER OFF
GO
DISABLE TRIGGER [trigMetaDoc_TraceDbChanges] ON DATABASE
GO
/****** Object: DdlTrigger [trigMetaDoc_TraceDbChanges] Script Date: 04/22/2009 13:21:29 ******/
ENABLE TRIGGER [trigMetaDoc_TraceDbChanges] ON DATABASE
GO
A: If your Database is SQL Server, we might have just the solution you're looking for. SQL Source Control 1.0 has now been released.
http://www.red-gate.com/products/SQL_Source_Control/index.htm
This integrates into SSMS and provides the glue between your database objects and your VCS. The 'scripting out' happens transparently (it uses the SQL Compare engine under the hood), which should make it so straightforward to use that developers won't be discouraged from adopting the process.
An alternative Visual Studio solution is ReadyRoll, which is implemented as a sub-type of the SSDT Database Project. This takes a migrations-driven approach, which is more suited to the automation requirements of DevOps teams.
A: While this question has many good answers, most of them don't reflect more recent changes in the market, specifically from commercial tools.
Here is a short list of tools that do database version control; I listed the pros and cons of each (full disclosure: I work for DBmaestro).
Red-Gate – has been on the market for many years. It provides version control of database objects using scripts integrated with file-based version control.
DBVS – provides version control of the database objects using scripts integrated with file-based version control.
DBmaestro – Provides an enforcement of the version control processes (check-out / check-in) on the real database objects. So there is no question if the version control repository is in-sync with the database being used by the application.
I would encourage you to read a comprehensive, unbiased review on Database Enforced Change Management solutions by veteran database expert Ben Taylor which he posted on LinkedIn https://www.linkedin.com/pulse/article/20140907002729-287832-solve-database-change-mangement-with-dbmaestro
A: Yes. Code is code. My rule of thumb is that I need to be able to build and deploy the application from scratch, without looking at a development or production machine.
A: Yes ... our databases are designed in ERwin and the DDLs for each version are automatically generated. The ERwin files are kept in our source code control system (actually, so are our engineering documents).
A: We use replication and clustering to manage our databases, as well as backups. We use Serena to manage our SQL scripts and configuration implementations. Before a configuration change is made, we perform a backup as part of the change management process. This backup satisfies our rollback requirement.
I think it all depends on scale. Are you talking about enterprise applications that need offsite backups and disaster recovery? A small workgroup running an accounting application? Or somewhere in between?
A: We have our Create/Alter scripts under source control. As for the database itself, when you have hundreds of tables and a lot of data changing every minute, it would be a CPU and HDD killer to version the whole database. That's why backup is still, in my view, the best way to control your data.
A: I always check my database structure dumps into source control. Full database dumps however I normally just compress and put away for storage.
A: My team versions our database schema as C# classes with the rest of our code. We have a homegrown C# program (<500 lines of code) that reflects over the classes and creates SQL commands to build, drop and update the database. After creating the database we run sqlmetal to generate a LINQ mapping, which is then compiled in another project that is used to generate test data. The whole thing works really well because data access is checked at compile time. We like it because the schema is stored in a .cs file which is easy to track and compare in trac/svn.
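The core trick in a homegrown scripter like that is plain reflection. A toy sketch (the type mapping is deliberately naive, and the class and method names are made up):
using System;
using System.Reflection;
using System.Text;

static class SchemaScripter
{
    // Builds a CREATE TABLE statement from a class's public properties.
    public static string CreateTableSql(Type t)
    {
        StringBuilder sb = new StringBuilder();
        sb.AppendLine("CREATE TABLE [" + t.Name + "] (");
        PropertyInfo[] props = t.GetProperties();
        for (int i = 0; i < props.Length; i++)
        {
            // Naive type mapping: int -> INT, everything else -> NVARCHAR(255).
            string sqlType = props[i].PropertyType == typeof(int) ? "INT" : "NVARCHAR(255)";
            sb.Append("  [" + props[i].Name + "] " + sqlType);
            sb.AppendLine(i < props.Length - 1 ? "," : "");
        }
        sb.AppendLine(")");
        return sb.ToString();
    }
}
Calling SchemaScripter.CreateTableSql(typeof(SomeEntity)) emits the CREATE TABLE statement for that class.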
A: RedGate software makes some great tools that will help you version your database. Be sure to try to have your devs build their own isolated local databases for dev work rather than rely on a "dev server" which may or may not be down at some time.
A: I have used RedGate SQL Compare Pro for schema synchronization with script folder, then I commit all my update to version control. It works great.
A: "Short version: dump your production database into a git repository for an instant backup solution."
A: I've started working on sqlHawk which is aimed at providing (open source) tooling around this problem.
It's currently in fairly early stages, but does already support storing and enforcing stored procedures and running scripted updates.
I'd be grateful for any input from anyone who has the time to look at this tool.
Apologies for blatant self promotion, but I hope this is useful to someone!
A: I agree with many of the postings concerning Ruby's ActiveRecord migrations - they are an elegant way to manage the database in small incremental files that everyone can share. With that said, I've recently implemented a project using Visual Studio's Database Project, and it has kinda made me a believer. Short story - you create a database project and import all (if any) existing database objects into it (tables/views/triggers/keys/users/etc). That import results in a "Create" script per object. To manage the database you alter the create script, and on deploy VS compares the target database to the state of the database residing in your project and applies the proper alter statements.
It really is a bit of magic and I have to admit, it's one of the better things the VS team has done. I'm really impressed up to this point.
Of course, you can manage the whole database project in the version control system of your choice.
A: Wow, so many answers. For solid database versioning you need to version control the code that changes your database. Some CMSes offer configuration management tools for this, such as the one in Drupal 8, with practical steps to arrange your workflow and ensure the database configuration is versioned, even in team environments.
A: The databases themselves? No.
The scripts that create them, including static data inserts, stored procedures and the like? Of course. They're text files; they are included in the project and are checked in and out like everything else.
Of course in an ideal world your database management tool would do this; but you just have to be disciplined about it.
A: The best practice I have seen is creating a build script to scrap and rebuild your database on a staging server. Each iteration was given a folder for database changes, and all changes were scripted as "DROP ... CREATE" pairs. This way you can roll back to an earlier version at any time by pointing the build at the folder of the version you want.
I believe this was done with NAnt/CruiseControl.
A: YES, I think it is important to version your database. Not the data, but the schema for certain.
In Ruby On Rails, this is handled by the framework with "migrations". Any time you alter the db, you make a script that applies the changes and check it into source control.
My shop liked that idea so much that we added the functionality to our Java-based build using shell scripts and Ant. We integrated the process into our deployment routine. It would be fairly easy to write scripts to do the same thing in other frameworks that don't support DB versioning out-of-the-box.
A: We have a weekly sql dump into a subversion repo. It's fully automated but it's a REALLY beefy task.
You'll want to limit the number of revisions because it really chews up disk space after a while!
A: I version control the create script, and I use the svn version tag within it. Then, whenever I get a version that is going to be used, I create a script in a dbpatches/ directory named as the version to roll up to. The job of that script is to modify a current database without destroying the data. dbpatches/, for example, might have files named 201, 220, and 240. If the database is currently at level 201, apply patch 220, then patch 240.
DROP TABLE IF EXISTS `meta`;
CREATE TABLE `meta` (
`property` varchar(255),
`value` varchar(255),
PRIMARY KEY (`property`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO `meta` VALUES ('version', '$Rev: 240 $');
Don't forget to test your code before considering a patch good. Caveat emptor!
A: As a rule, we keep all of our object code (stored procedures, views, triggers, functions, etc.) in source control because these objects are code, and as about every other answer here agrees, code belongs in some form of version control system.
As for CREATE, DROP, ALTER statements, etc. (DDL), we developed and use BuildMaster to manage the deployment of these scripts such that they can be run once and only once against a target database (whether they fail or not). The general idea is that developers will upload change scripts into the system and when it comes time for deployment, only the change scripts that haven't been run against the target environment's database will be run (this is managed very similarly to Autocracy's answer). The reason for this separation of script types lies in that once you manipulate a table's structure, add an index, etc., you effectively cannot undo that without writing a brand new script, or restoring the database - as opposed to the object code where you can simply drop a view or stored procedure then recreate it.
Some of the benefits can be seen when, for example, you restore your production database into your integration environment, the system automatically knows exactly which scripts haven't been run and will alter the table structure of that newly restored database to be current with regards to development.
A: We insist upon change scripts and a master data definition script. These are checked into CVS along with any other source code. The PL/SQL (we are an Oracle shop) is also source controlled in CVS. The change scripts are repeatable and can be passed to everyone on the team. Basically, just because it is a database, there is never an excuse not to code it and use a source control system to track the changes.
A: We maintain DDL (and sometimes DML) scripts generated by our ER tool (PowerAMC).
We have a bunch of shell scripts which rename the scripts, starting with a number, on the trunk branch.
Each script is committed and tagged with the bugzilla number.
These scripts are then at need merged within the release branches along with the application code.
We have a table recording the scripts and their status.
Each script is executed in order and recorded in this table on each install by the deploying tool.
A: Your project team can have a DBA to whom every developer forwards their create, alter, delete and insert/update (for master data) SQL statements. The DBA would run those queries and, on successfully making the required update, add those statements to a text file or a spreadsheet. Each addition can be labeled as a savepoint. In case you need to revert to a particular savepoint, just drop everything and run the queries up to the labeled savepoint. This approach is just a thought... a bit of fine tuning here would make it work for your development environment.
A: Any database interface code absolutely should go into version control (Stored Procedures, Functions, etc).
For structure and data, it is a judgement call. I personally keep a clean structural template of my databases around, but don't store them in version control, due to the size. But storing it in version control can be very beneficial, even for just having a history.
A: A big problem, often overlooked, is that larger web-based systems require a transitional period or bucket-testing approach to making new releases. This makes it essential to have both rollback and a mechanism for supporting the old and new schema in the same DB. This requires a scaffolding approach (made popular by the Agile DB folks). In this scenario, lack of process in DB source control can be a total disaster. You need old schema scripts, new schema scripts and a set of intermediate scripts, as well as a tidy-up once the system is fully on the new version (or rolled back).
Rather than having scripts to recreate schema from scratch, what is required is a state based approach, where you need scripts purely to move the DB into the state you require, both forward and back, from version to version. Your DB becomes a series of state scripts, which can be easily source controlled and tagged along with the rest of the source.
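Concretely, each version transition becomes a pair of state scripts, one forward and one back (a sketch; the file-naming convention is invented):

-- 014_up.sql: move the schema from state 13 to state 14
ALTER TABLE orders ADD COLUMN shipped_on DATE;

-- 014_down.sql: move the schema from state 14 back to state 13
ALTER TABLE orders DROP COLUMN shipped_on;

During the transitional period, both the old and new application versions run against the intermediate schema; only after full cut-over (or rollback) is the tidy-up script applied.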
A: Yes, of course. We generate dumps of our PostgreSQL schemas whenever there's a change and check it in. It's already saved us many times, and I've only been at my job a few months.
A: Sadly, I've seen more than one team developing PL/SQL programs (stored procedures in Oracle) - sometimes tens of thousands of lines of code - just by editing the code in TOAD (a database tool), without even saving the source to files (except for deployment). Even if the database is backed up regularly (I wouldn't take that for granted, though), the only way to retrieve an old version of a stored procedure is to restore the whole database, which may be many GB large. And of course, concurrent changes to the same procedure lead to lost work when more than one developer works on the same project.
A: I use ActiveRecord Migrations. This Ruby gem can be used outside of a Rails project and there are adapters to handle most databases you'll come across. My tip: if you are able to run your project off Postgres, you get transactional schema migrations. That means you don't end up with a broken database if a migration only half-applies.
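The transactional point deserves a concrete illustration: PostgreSQL runs DDL inside transactions, so a migration either applies completely or not at all (a minimal sketch; the table and index names are placeholders):

BEGIN;
ALTER TABLE users ADD COLUMN email VARCHAR(255);
-- If the next statement fails, the ALTER above is rolled back as well,
-- so the schema is never left half-migrated.
CREATE UNIQUE INDEX users_email_idx ON users (email);
COMMIT;

On databases without transactional DDL (MySQL and Oracle, for example, commit DDL implicitly), a failure between the two statements would leave the new column behind with no index.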
A: One of Kira's prime use cases is database upgrades: the schema is explicitly specified outside the database, as code. Kira can then manage the database and upgrade it to any version from any version.
A: Yes, we source control our sql scripts too with subversion. It's a good practice and you can recreate the schema with default data whenever needed.
A: For Oracle I use oracle-ddl2svn, a self-written Java program that automatically tracks changes to an Oracle DDL schema in SVN.
A: I've heard people say you absolutely have to keep your schemas in source control. I'm not sure I agree. This really depends on the system you're working with. If your system is relatively small, the data is not terribly important, and the speed at which you can bring another development environment online is crucial, then yes, you can benefit from it. However, when your schema is useless without the data and the database is extremely large, it becomes virtually impossible to "source control" your database. Sure, you can still keep your DDL code in source control, but that's essentially useless: you can't get the data you need without a backup/restore.
In larger database development efforts, I've found backup and restore to be the preferred rollback option. Sure, you can keep procs, views, functions, etc. in source control, but keeping table.sql is not necessary. Also, if your deployment process is airtight, you'll most likely never have to "rollback" your production environment.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "612"
}
|
Q: How can I customize the syntax highlighting in Visual Studio 2005? Is it possible to customize the syntax highlighting in Visual Studio 2005? Up to now, I found no way to do this.
In eclipse for example, it's possible to setup custom highlighting.
A: Do you mean more than just changing the colors -- like defining a new language with its own symbols and keywords? If so,
http://msdn.microsoft.com/en-us/library/bb165041(VS.80).aspx
A: Yes, see Tools > Options > Fonts and Colors.
Here is a theme gallery if you want to browse some options.
A: For C++, you can add a file called usertype.dat that contains your list of custom keywords.
This MSDN page has a few more details.
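The file itself is just a plain-text list of words, one per line, typically saved in the same directory as the IDE executable. For example (these keywords are only placeholders):

BOOL
HRESULT
my_custom_type
ternary_flag

After restarting Visual Studio, those words are colored with the user-keyword color configurable under Tools > Options > Fonts and Colors (the exact display item name may vary by version).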
A: This is a great free collection of Visual Studio color schemes: http://studiostyl.es/
Import the settings via Tools -> Import/Export Settings.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/115388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|